Bug 45939

Summary: Add entry points to GraphicsContext3D needed for Chromium compositor port
Product: WebKit
Component: WebCore Misc.
Reporter: Kenneth Russell <kbr>
Assignee: Kenneth Russell <kbr>
Status: RESOLVED FIXED
Severity: Normal
Priority: P2
CC: cmarrin, darin, hausmann, jamesr, senorblanco, vangelis, webkit.review.bot
Version: 528+ (Nightly build)
Hardware: All
OS: All
Bug Blocks: 45912

Kenneth Russell
Reported 2010-09-16 18:09:48 PDT
In order to host Chromium's compositor on top of GraphicsContext3D, a couple of Chromium-specific extensions need to be exposed, and an additional boolean argument to the constructor is needed indicating whether the GraphicsContext3D instance is supposed to render directly to the passed HostWindow, or be treated as an off-screen context per the current semantics.
Attachments
Patch (17.42 KB, patch)
2010-09-16 18:20 PDT, Kenneth Russell
cmarrin: review-
kbr: commit-queue-
Kenneth Russell
Comment 1 2010-09-16 18:20:12 PDT
Created attachment 67871 [details]
Patch

From the ChangeLog: Added entry points for two Chromium-specific extensions, and added a flag to the GraphicsContext3D constructor, currently unsupported by all ports (including Chromium), indicating whether the context should render directly to the passed HostWindow or off-screen per the current semantics. The switch to use GraphicsContext3D in Chromium's compositor will follow in a subsequent patch.

No new tests; functionality is unchanged. Built and tested Chromium and WebKit on Mac OS X.
WebKit Review Bot
Comment 2 2010-09-16 18:23:17 PDT
Attachment 67871 [details] did not pass style-queue:

Failed to run "['WebKitTools/Scripts/check-webkit-style']" exit_code: 1
WebKit/chromium/src/GraphicsContext3D.cpp:56: Alphabetical sorting problem. [build/include_order] [4]
Total errors found: 1 in 10 files

If any of these errors are false positives, please file a bug against check-webkit-style.
James Robinson
Comment 3 2010-09-16 18:29:23 PDT
Comment on attachment 67871 [details]
Patch

Looks good, just a few nits to address before landing:

- this patch uses UNUSED_PARAM() many times when it doesn't need to. If all permutations of a function will not use a parameter, you can just avoid naming the parameter in the .cpp. UNUSED_PARAM is more for this case:

    void foo(int parameter) {
    #ifdef SOMETHING
        bar(parameter);
    #else
        UNUSED_PARAM(parameter);
    #endif
    }

Also, consider changing renderDirectlyToHostWindow from a bool to an enum.

View in context: https://bugs.webkit.org/attachment.cgi?id=67871&action=prettypatch

> WebCore/platform/graphics/mac/GraphicsContext3DMac.mm:103
> +    UNUSED_PARAM(renderDirectlyToHostWindow);

instead of UNUSED_PARAM() just don't name the parameter

> WebCore/platform/graphics/qt/GraphicsContext3DQt.cpp:506
> +GraphicsContext3D::GraphicsContext3D(GraphicsContext3D::Attributes attrs, HostWindow* hostWindow, bool renderDirectlyToHostWindow)
>     : m_internal(new GraphicsContext3DInternal(attrs, hostWindow))
> {
> +    UNUSED_PARAM(renderDirectlyToHostWindow);

instead of using UNUSED_PARAM() here, just don't name the parameter
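[Editor's note] The two conventions James contrasts can be sketched side by side. The function names below are invented for illustration and are not from the patch:

```cpp
// Sketch of the two conventions, with hypothetical function names.
#define UNUSED_PARAM(x) (void)(x)

// Convention 1: the parameter is unused on EVERY code path, so simply
// leave it unnamed in the definition -- no macro needed, and the
// compiler rejects any accidental use.
int alwaysIgnores(int /* flags */)
{
    return 1;
}

// Convention 2: the parameter is used in some build configurations only,
// so UNUSED_PARAM silences the warning on the path that skips it.
int sometimesUses(int flags)
{
#ifdef SOMETHING
    return flags;
#else
    UNUSED_PARAM(flags);
    return 0;
#endif
}
```

With SOMETHING undefined, the second function compiles warning-free and ignores its argument, which is exactly the case the patch under review hits on every port.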
Kenneth Russell
Comment 4 2010-09-16 18:45:41 PDT
(In reply to comment #3)
> (From update of attachment 67871 [details])
> Looks good, just a few nits to address before landing:
>
> - this patch uses UNUSED_PARAM() many times when it doesn't need to. If all permutations of a function will not use a parameter you can just avoid naming the parameter in the .cpp. UNUSED_PARAM is more for this case:
>
> void foo(int parameter) {
> #ifdef SOMETHING
>     bar(parameter);
> #else
>     UNUSED_PARAM(parameter);
> #endif
> }

I didn't realize this worked for function definitions as well as declarations. Will fix upon landing. Removing the inclusion of UNUSED_PARAM will also fix the style violation.

> Also, consider changing renderDirectlyToHostWindow from a bool to an enum.

Good idea; will do.

> View in context: https://bugs.webkit.org/attachment.cgi?id=67871&action=prettypatch
>
> > WebCore/platform/graphics/mac/GraphicsContext3DMac.mm:103
> > +    UNUSED_PARAM(renderDirectlyToHostWindow);
>
> instead of UNUSED_PARAM() just don't name the parameter

Will do.

> > WebCore/platform/graphics/qt/GraphicsContext3DQt.cpp:506
> > +GraphicsContext3D::GraphicsContext3D(GraphicsContext3D::Attributes attrs, HostWindow* hostWindow, bool renderDirectlyToHostWindow)
> >     : m_internal(new GraphicsContext3DInternal(attrs, hostWindow))
> > {
> > +    UNUSED_PARAM(renderDirectlyToHostWindow);
>
> instead of using UNUSED_PARAM() here, just don't name the parameter

Will do.
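[Editor's note] The bool-to-enum change Kenneth agrees to here could look roughly like the sketch below. The enum name, its values, and the class name are guesses for illustration, not the identifiers that eventually landed:

```cpp
// Illustrative sketch only: names are invented, not from the patch.
enum RenderStyle {
    RenderOffscreen,             // current semantics: off-screen context
    RenderDirectlyToHostWindow   // new: render straight into the HostWindow
};

class GraphicsContext3DSketch {
public:
    explicit GraphicsContext3DSketch(RenderStyle style)
        : m_renderStyle(style)
    {
    }

    // An enum reads better at call sites than a bare 'true':
    //     GraphicsContext3DSketch context(RenderDirectlyToHostWindow);
    // versus the opaque
    //     GraphicsContext3DSketch context(true);
    bool rendersDirectlyToHostWindow() const
    {
        return m_renderStyle == RenderDirectlyToHostWindow;
    }

private:
    RenderStyle m_renderStyle;
};
```

The readability gain at call sites is the usual argument for this kind of two-state enum over a boolean constructor parameter.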
Kenneth Russell
Comment 5 2010-09-16 18:53:58 PDT
Chris Marrin
Comment 6 2010-09-17 05:13:31 PDT
Comment on attachment 67871 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=67871&action=prettypatch

I really think it is a mistake to commit this change in its current state. Please consider backing it out and following my recommendations.

> WebCore/platform/graphics/GraphicsContext3D.h:772
> +

I don't like having explicit 'supports' calls like this. If you want to start implementing an extension mechanism, you should do it the way we've defined it in WebGL. Make a base class called Extension3D or something and add a getExtension() call which returns one. Then make a concrete subclass for your extension and add the appropriate calls to that. You can then cast the return from getExtension to this subclass and use it. It would probably be best to not keep a reference to the GC3D in this class to avoid lifetime issues. You can just have the API pass it for each call.

You should also be consistent about using the GL_ prefix with all your extensions.

>>> WebCore/platform/graphics/mac/GraphicsContext3DMac.mm:103
>>> +    UNUSED_PARAM(renderDirectlyToHostWindow);
>>
>> instead of UNUSED_PARAM() just don't name the parameter
>
> Will do.

I disagree. This is the right way to handle this, especially since this is a boolean. It's better for documentation.

>>> WebCore/platform/graphics/qt/GraphicsContext3DQt.cpp:506
>>> +    UNUSED_PARAM(renderDirectlyToHostWindow);
>>
>> instead of using UNUSED_PARAM() here, just don't name the parameter
>
> Will do.

Again, doing it this way is better for documentation.
Kenneth Russell
Comment 7 2010-09-17 10:09:32 PDT
(In reply to comment #6)
> (From update of attachment 67871 [details])
> View in context: https://bugs.webkit.org/attachment.cgi?id=67871&action=prettypatch
>
> I really think it is a mistake to commit this change in its current state. Please consider backing it out and following my recommendations.
>
> > WebCore/platform/graphics/GraphicsContext3D.h:772
> > +
>
> I don't like having explicit 'supports' calls like this. If you want to start implementing an extension mechanism, you should do it the way we've defined it in WebGL. Make a base class called Extension3D or something and add a getExtension() call which returns one. Then make a concrete subclass for your extension and add the appropriate calls to that. You can then cast the return from getExtension to this subclass and use it. It would probably be best to not keep a reference to the GC3D in this class to avoid lifetime issues. You can just have the API pass it for each call.

I really don't like this suggested structure. For WebGL we made a deliberate decision to expose OpenGL extensions in an object-oriented fashion, but to do it cleanly at the GraphicsContext3D level would really require both Extension3D and GraphicsContext3D to be RefCounted, and Extension3D objects to use a RefPtr to refer to their GraphicsContext3D. Forcing all of the API entry points of the Extension3D subclasses to take a GraphicsContext3D as the first argument as a hack to avoid making everything RefCounted is just gross.

WebGLRenderingContext is reference counted, as will be its extension objects. Having them bottom out into explicitly and easily callable entry points on GraphicsContext3D makes implementing them much simpler. Adding Extension3D classes at this level is going to add a *lot* of boilerplate code, duplicated per platform where the extensions are implemented per platform.

> You should also be consistent about using the GL_ prefix with all your extensions.

I don't understand this comment. GraphicsContext3D stripped the gl prefix from function names and the GL_ prefix from enum values. The newly added enums here come from desktop OpenGL, which didn't use a vendor suffix, which is why they're READ_ONLY rather than READ_ONLY_CHROMIUM, etc. The new extension entry points use a vendor suffix just as they would in OpenGL for C.

> >>> WebCore/platform/graphics/mac/GraphicsContext3DMac.mm:103
> >>> +    UNUSED_PARAM(renderDirectlyToHostWindow);
> >>
> >> instead of UNUSED_PARAM() just don't name the parameter
> >
> > Will do.
>
> I disagree. This is the right way to handle this, especially since this is a boolean. It's better for documentation.

The WebKit style guide is mum on this issue, but a quick grep through the source tree does show, as James pointed out, that UNUSED_PARAM is in many places used in conjunction with #ifdefs. Also, this parameter is no longer a boolean but an enum. I prefer the current structure as it saves a lot of lines of code.

> >>> WebCore/platform/graphics/qt/GraphicsContext3DQt.cpp:506
> >>> +    UNUSED_PARAM(renderDirectlyToHostWindow);
> >>
> >> instead of using UNUSED_PARAM() here, just don't name the parameter
> >
> > Will do.
>
> Again, doing it this way is better for documentation.
Darin Adler
Comment 8 2010-09-17 10:11:16 PDT
(In reply to comment #6)
> >>> WebCore/platform/graphics/mac/GraphicsContext3DMac.mm:103
> >>> +    UNUSED_PARAM(renderDirectlyToHostWindow);
> >>
> >> instead of UNUSED_PARAM() just don't name the parameter
> >
> > Will do.
>
> I disagree. This is the right way to handle this, especially since this is a boolean. It's better for documentation.

Chris, I agree with James on this and disagree with you. This is not the WebKit style way to handle it. We want to avoid using the UNUSED_PARAM macro in almost every case. It's both inelegant and doesn't work: you can say UNUSED_PARAM and then go ahead and use the parameter, which is not good! If we feel the need to name a parameter for clarity then we can put it in /* */ comments at the top of the function definition. I think this clarity is what you mean by "documentation". In many cases, having the name of the parameter does not increase clarity, but the technique with the /* */ comment works fine for cases where it does.
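[Editor's note] The commented-name technique Darin describes, sketched with a hypothetical function:

```cpp
// Hypothetical sketch of the style Darin describes: the parameter is
// unnamed (so it physically cannot be used inside the body), while a
// /* */ comment preserves the name for readers.
int makeContext(bool /* renderDirectlyToHostWindow */)
{
    // Unlike UNUSED_PARAM(), which leaves the name usable, any attempt
    // to reference the parameter here is a compile error.
    return 0;
}
```

This gets the documentation benefit Chris wants while keeping the compile-time guarantee James and Darin prefer.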
Chris Marrin
Comment 9 2010-09-17 10:48:48 PDT
(In reply to comment #7)
> (In reply to comment #6)
> > (From update of attachment 67871 [details])
> > View in context: https://bugs.webkit.org/attachment.cgi?id=67871&action=prettypatch
> >
> > I really think it is a mistake to commit this change in its current state. Please consider backing it out and following my recommendations.
> >
> > > WebCore/platform/graphics/GraphicsContext3D.h:772
> > > +
> >
> > I don't like having explicit 'supports' calls like this. If you want to start implementing an extension mechanism, you should do it the way we've defined it in WebGL. Make a base class called Extension3D or something and add a getExtension() call which returns one. Then make a concrete subclass for your extension and add the appropriate calls to that. You can then cast the return from getExtension to this subclass and use it. It would probably be best to not keep a reference to the GC3D in this class to avoid lifetime issues. You can just have the API pass it for each call.
>
> I really don't like this suggested structure. For WebGL we made a deliberate decision to expose OpenGL extensions in an object-oriented fashion, but to do it cleanly at the GraphicsContext3D level would really require both Extension3D and GraphicsContext3D to be RefCounted, and Extension3D objects to use a RefPtr to refer to their GraphicsContext3D. Forcing all of the API entry points of the Extension3D subclasses to take a GraphicsContext3D as the first argument as a hack to avoid making everything RefCounted is just gross.
>
> WebGLRenderingContext is reference counted, as will be its extension objects. Having them bottom out into explicitly and easily callable entry points on GraphicsContext3D makes implementing them much simpler. Adding Extension3D classes at this level is going to add a *lot* of boilerplate code, duplicated per platform where the extensions are implemented per platform.

I understand you don't like my design proposal, but we have to do something different from what you've checked in. We can't have functions like mapBufferSubDataCHROMIUM() in the cross-platform API of GraphicsContext3D. That's a function that is specific to your implementation needs and should be in platform dependent code. In fact, it is actually implemented in your GraphicsContext3DInternal object, so why not just give access to that?

I'm all for formalizing the extension mechanism, but doing it by constantly expanding the GC3D API will eventually cause it to be crushed under its own weight.

Here's another idea. What if we had a single Extension3D object, returned as a const pointer from a getExtensions() function in GC3D? The concrete implementation of this object would be platform specific, so you'd do something like:

    static_cast<const Extensions3DChromium*>(ctx->getExtensions())->mapBufferSubData(...);

The Extensions3D class could have its own API for the extensions common to all platforms. GC3D could also have a hasExtension(string) function to test whether a particular implementation had a particular extension. Since you wouldn't ever keep a persistent reference to the Extensions3D object, its lifetime would match that of GC3D, and it could keep a GC3D pointer to make it easy to implement the extension API.

How would that work? I don't follow you.

> > You should also be consistent about using the GL_ prefix with all your extensions.
>
> I don't understand this comment. GraphicsContext3D stripped the gl prefix from function names and the GL_ prefix from enum values. The newly added enums here come from desktop OpenGL, which didn't use a vendor suffix, which is why they're READ_ONLY rather than READ_ONLY_CHROMIUM, etc. The new extension entry points use a vendor suffix just as they would in OpenGL for C.

I'm just referring to the fact that you use EXT_texture_format_BGRA8888 and GL_CHROMIUM_map_sub. You should use GL_ on both or neither.
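[Editor's note] Chris's single-extensions-object idea could be sketched as below. The class names come from his comment; the method bodies, the std::string-based query, and the stub behavior are invented for illustration and are not the API that was ultimately landed:

```cpp
#include <string>

// Sketch of the proposal: one extensions object per context, with a
// platform-specific concrete subclass. All bodies are stubs.
class Extensions3D {
public:
    virtual ~Extensions3D() { }
    // Cross-platform query, analogous to the hasExtension(string) idea.
    virtual bool supports(const std::string& name) const = 0;
};

class Extensions3DChromium : public Extensions3D {
public:
    virtual bool supports(const std::string& name) const
    {
        return name == "GL_CHROMIUM_map_sub";
    }

    // Chromium-only entry point lives on the subclass, keeping it out of
    // the cross-platform GraphicsContext3D interface.
    void* mapBufferSubData(unsigned /* target */, int /* offset */,
                           int /* size */, unsigned /* access */)
    {
        return 0; // stub
    }
};

class GraphicsContext3DSketch {
public:
    // Callers downcast, as in Chris's example:
    //   static_cast<Extensions3DChromium*>(ctx.getExtensions())->mapBufferSubData(...);
    Extensions3D* getExtensions() { return &m_extensions; }

private:
    Extensions3DChromium m_extensions; // lifetime matches the context
};
```

Because the extensions object is owned by the context, callers never hold a reference past the context's lifetime, which is the lifetime argument Chris makes above.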
Kenneth Russell
Comment 10 2010-09-29 18:16:19 PDT
(In reply to comment #9)
> (In reply to comment #7)
> > (In reply to comment #6)
> > > I don't like having explicit 'supports' calls like this. If you want to start implementing an extension mechanism, you should do it the way we've defined it in WebGL. Make a base class called Extension3D or something and add a getExtension() call which returns one. Then make a concrete subclass for your extension and add the appropriate calls to that. You can then cast the return from getExtension to this subclass and use it. It would probably be best to not keep a reference to the GC3D in this class to avoid lifetime issues. You can just have the API pass it for each call.
> >
> > I really don't like this suggested structure. For WebGL we made a deliberate decision to expose OpenGL extensions in an object-oriented fashion, but to do it cleanly at the GraphicsContext3D level would really require both Extension3D and GraphicsContext3D to be RefCounted, and Extension3D objects to use a RefPtr to refer to their GraphicsContext3D. Forcing all of the API entry points of the Extension3D subclasses to take a GraphicsContext3D as the first argument as a hack to avoid making everything RefCounted is just gross.
> >
> > WebGLRenderingContext is reference counted, as will be its extension objects. Having them bottom out into explicitly and easily callable entry points on GraphicsContext3D makes implementing them much simpler. Adding Extension3D classes at this level is going to add a *lot* of boilerplate code, duplicated per platform where the extensions are implemented per platform.
>
> I understand you don't like my design proposal, but we have to do something different from what you've checked in. We can't have functions like mapBufferSubDataCHROMIUM() in the cross-platform API of GraphicsContext3D. That's a function that is specific to your implementation needs and should be in platform dependent code. In fact, it is actually implemented in your GraphicsContext3DInternal object, so why not just give access to that?

That suggestion doesn't work in the Chromium port. Chromium's GraphicsContext3D implementation needs to bridge out to code in the surrounding browser, and the GraphicsContext3DInternal needs to refer to types in Chromium's WebKit API (WebKit/chromium/public). These references can't be made from WebCore.

> I'm all for formalizing the extension mechanism, but doing it by constantly expanding the GC3D API will eventually be crushed under its own weight.

I think this is an exaggeration. At least for Chromium we have no desire or intention to add more extensions, and in fact aim to remove GL_CHROMIUM_copy_texture_to_parent_texture as soon as possible. In the WebGL working group discussions there are only a fairly small number of OpenGL ES 2.0 extensions under consideration for addition.

> Here's another idea. What if we had a single Extension3D object, returned as a const pointer from a getExtensions() function in GC3D. The concrete implementation of this object would be platform specific, so you'd do something like:
>
>     static_cast<const Extensions3DChromium*>(ctx->getExtensions())->mapBufferSubData(...);
>
> The Extensions3D class could have its own API for the extensions common to all platforms. GC3D could also have a hasExtension(string) function to test whether a particular implementation had a particular extension.

I agree that we should probably get rid of the explicit supportsXYZ() calls in favor of hasExtension(String).

> Since you wouldn't ever keep a persistent reference to the Extensions3D object, its lifetime would match that of GC3D, and it could keep a GC3D pointer to make it easy to implement the extension API.
>
> How would that work?

It's possible to make this work. Doing so will require adding Extensions3D (which needs a per-port create() function), Extensions3DChromium, associated Internal classes, etc. Extensions3D will need a weak reference to GraphicsContext3D, and friend declarations will need to be made to allow the extension implementation to refer to the GraphicsContext3D internals. There will be a fair amount of replicated code per platform, as there is in GraphicsContext3D. I'm willing to do the work but don't really believe it is worth the effort -- or, at least, I think there is higher priority work to be done toward finishing the WebGL implementation. If you feel strongly about it then I will file another bug to do this refactoring and assign it to myself.

> > > You should also be consistent about using the GL_ prefix with all your extensions.
> >
> > I don't understand this comment. GraphicsContext3D stripped the gl prefix from function names and the GL_ prefix from enum values. The newly added enums here come from desktop OpenGL, which didn't use a vendor suffix, which is why they're READ_ONLY rather than READ_ONLY_CHROMIUM, etc. The new extension entry points use a vendor suffix just as they would in OpenGL for C.
>
> I'm just referring to the fact that you use EXT_texture_format_BGRA8888 and GL_CHROMIUM_map_sub. You should use GL_ on both or neither.

I see. Depending on the decision of how to handle these extension functions I'll either do this cleanup in GraphicsContext3D or in Extensions3D.
Chris Marrin
Comment 11 2010-09-30 06:48:28 PDT
(In reply to comment #10)
> ...
> > I'm all for formalizing the extension mechanism, but doing it by constantly expanding the GC3D API will eventually be crushed under its own weight.
>
> I think this is an exaggeration. At least for Chromium we have no desire or intention to add more extensions and in fact aim to remove GL_CHROMIUM_copy_texture_to_parent_texture as soon as possible. In the WebGL working group discussions there are only a fairly small number of OpenGL ES 2.0 extensions that are under consideration for addition.

It's not an exaggeration if you think about how this will look 5 years down the line. And what if someone wants to do some experimentation, adding some crazy extension for a single platform? Adding that to GraphicsContext3D will be cumbersome at best. I really think we need a better mechanism.

> > Here's another idea. What if we had a single Extension3D object, returned as a const pointer from a getExtensions() function in GC3D. The concrete implementation of this object would be platform specific, so you'd do something like:
> >
> >     static_cast<const Extensions3DChromium*>(ctx->getExtensions())->mapBufferSubData(...);
> >
> > The Extensions3D class could have its own API for the extensions common to all platforms. GC3D could also have a hasExtension(string) function to test whether a particular implementation had a particular extension.
>
> I agree that we should probably get rid of the explicit supportsXYZ() calls in favor of hasExtension(String).
>
> > Since you wouldn't ever keep a persistent reference to the Extensions3D object, its lifetime would match that of GC3D, and it could keep a GC3D pointer to make it easy to implement the extension API.
> >
> > How would that work?
>
> It's possible to make this work. Doing so will require adding Extensions3D (which needs a per-port create() function), Extensions3DChromium, associated Internal classes, etc. Extensions3D will need a weak reference to GraphicsContext3D and friend declarations will need to be made to allow the extension implementation to refer to the GraphicsContext3D internals. There will be a fair amount of replicated code per platform, as there is in GraphicsContext3D. I'm willing to do the work but don't really believe it is worth the effort -- or, at least, I think there is higher priority work to be done toward finishing the WebGL implementation. If you feel strongly about it then I will file another bug to do this refactoring and assign it to myself.

I feel strongly that going further down this path is a mistake. Adding hackery for expediency just incurs greater cost down the road: we not only have to fix the problem, but by then the memory of why it was done in the first place has faded, so it's more expensive to figure out the right fix.

I've opened a new bug for this: https://bugs.webkit.org/show_bug.cgi?id=46894. Let's continue the conversation there. The proposal I made there is different in that it only has a single Extension3D object which all extensions for a given platform hang off of. That will make the platform specific code less cumbersome. And since many extensions will be exposing OpenGL functionality, we can put a lot of this in a single Extension3DOpenGL subclass which can be shared by any platform using OpenGL (or maybe even ANGLE). Let me know what you think...
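[Editor's note] The shared-OpenGL-subclass layering Chris proposes at the end might factor like this. Everything below is a hypothetical sketch; only the class names Extensions3D, Extension3DOpenGL (spelled Extensions3DOpenGL here for consistency), and Extensions3DChromium come from the discussion:

```cpp
#include <string>

// Hypothetical layering: extensions common to OpenGL-backed ports live in
// one shared subclass; each port adds only what is unique to it.
class Extensions3D {
public:
    virtual ~Extensions3D() { }
    virtual bool supports(const std::string& name) const = 0;
};

// Shared by any port whose GraphicsContext3D sits on desktop OpenGL.
class Extensions3DOpenGL : public Extensions3D {
public:
    virtual bool supports(const std::string& name) const
    {
        // Illustrative single entry; a real table would be longer.
        return name == "GL_EXT_texture_format_BGRA8888";
    }
};

// A port layers its private extensions on top of the shared ones.
class Extensions3DChromium : public Extensions3DOpenGL {
public:
    virtual bool supports(const std::string& name) const
    {
        return name == "GL_CHROMIUM_map_sub"
            || Extensions3DOpenGL::supports(name);
    }
};
```

This keeps the per-platform code small: a port like Chromium only overrides the queries and entry points it adds, inheriting the common OpenGL-backed behavior.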