| Summary: | [GTK] Software-only basic compositing |
|---|---|
| Product: | WebKit |
| Component: | WebKitGTK |
| Version: | 528+ (Nightly build) |
| Hardware: | PC |
| OS: | Linux |
| Status: | RESOLVED WONTFIX |
| Severity: | Normal |
| Priority: | P3 |
| Reporter: | Emanuele Aina <emanuele.aina> |
| Assignee: | Nobody <webkit-unassigned> |
| CC: | andrunko, cgarcia, changseok, clopez, gustavo, mcatanzaro, mrobinson, philip.chimento, yoon, zan |
| Attachments: | Patch (attachment 262186), Reintroduce TextureMapperImageBuffer (attachment 262187) |
Description
Emanuele Aina, 2015-07-24 01:45:08 PDT
From what I can see, the only thing that uses GL directly with Coordinated Graphics is ThreadedCompositor.cpp, so I guess I should start there, replacing the GL usage with GraphicsContext.

Yoon, if you have a moment to spare, any suggestion would be welcome. :)

---

(In reply to comment #1)
> From what I can see, the only thing that uses GL directly with Coordinated
> Graphics is ThreadedCompositor.cpp, so I guess I should start there,
> replacing the GL usage with GraphicsContext.
>
> Yoon, if you have a moment to spare, any suggestion would be welcome. :)

Please blame my laziness; I fixed the build of the threaded compositor.

Anyway, it looks like I didn't understand your use case properly. Why not use GL-accelerated compositing on the RPi2? AFAIK it has enough hardware power.

If you want to use the threaded compositor with TextureMapperImageBuffer, you don't have to do much: just skip ensureGLContext and glContext()->swapBuffers(), make a GraphicsContext from ThreadedCompositor::m_nativeSurfaceHandle, and pass it to the TextureMapperImageBuffer implementation. However, I think HW-accelerated compositing would be good enough on the RPi2.

---

> Please blame my laziness; I fixed the build of the threaded compositor.

Ah ah, no worries, I just picked an unfortunate time. :)

> Anyway, it looks like I didn't understand your use case properly. Why not use
> GL-accelerated compositing on the RPi2? AFAIK it has enough hardware power.

The current (closed) GL stack has been deemed not reliable enough for WebKit. It works well enough for the limited usage seen in Kodi/XBMC, but WebKit may stress it too much, and it has been decided that we will need to do without GL until the new open stack based on Mesa becomes viable.

> If you want to use the threaded compositor with TextureMapperImageBuffer, you
> don't have to do much: just skip ensureGLContext and glContext()->swapBuffers(),
> make a GraphicsContext from ThreadedCompositor::m_nativeSurfaceHandle, and pass
> it to the TextureMapperImageBuffer implementation.

Indeed, that was enough to get the threaded compositor to work without GL when not actually compositing layers. Now I've got position:fixed elements on their own layers (but with a lot of flickering, probably due to overdraw), and I will look at <video> elements once I have a clear understanding of what causes the flickering. Thanks!

---

Just a quick update: I've currently got it running with opaque position:fixed elements. I still need to fix elements with alpha and clean up the hugely messy patch; after that I will start looking at <video>.
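
A minimal sketch of the GL-free path discussed above, for illustration: skip ensureGLContext()/glContext()->swapBuffers() and paint the layer tree through a cairo-backed GraphicsContext handed to TextureMapperImageBuffer. Apart from the names taken from the thread (TextureMapperImageBuffer, ensureGLContext, glContext()->swapBuffers(), m_nativeSurfaceHandle), every member, helper, and call here is an assumption made for illustration rather than the actual WebKit API, and it assumes the native handle is an X11 window that cairo can wrap.

```cpp
// Sketch only: a software render path for a ThreadedCompositor-like class,
// assuming m_nativeSurfaceHandle is an X11 window. Members and calls marked
// "assumed" are hypothetical and may not match the real WebKit classes.
#include <cairo.h>
#include <cairo-xlib.h>

void ThreadedCompositor::renderLayerTree()
{
    // GL path in trunk: ensureGLContext(); ...; glContext()->swapBuffers();
    // Software path: wrap the native window in a cairo surface instead.
    cairo_surface_t* surface = cairo_xlib_surface_create(
        m_display,                 // assumed member: the X11 Display*
        m_nativeSurfaceHandle,     // from the thread: native window handle
        m_visual,                  // assumed member: the window's Visual*
        m_viewportSize.width(),    // assumed member: current viewport size
        m_viewportSize.height());
    cairo_t* cr = cairo_create(surface);

    // The cairo port can build a GraphicsContext on top of a cairo_t; treat
    // the exact constructor as an assumption.
    GraphicsContext graphicsContext(cr);

    // Paint the layer tree with the image-buffer-backed texture mapper
    // instead of the GL one (factory, setter, and scene call are assumed).
    auto textureMapper = TextureMapperImageBuffer::create();
    textureMapper->setGraphicsContext(&graphicsContext);
    m_scene->paintToTextureMapper(*textureMapper);

    cairo_surface_flush(surface);
    cairo_destroy(cr);
    cairo_surface_destroy(surface);
}
```

Note that, unlike the GL path, there is no buffer swap here: cairo draws straight to the window, which may be related to the flickering mentioned above and would probably need some form of double buffering or damage tracking to fix.
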

---

Created attachment 262186 [details]
Patch

---

Created attachment 262187 [details]
Reintroduce TextureMapperImageBuffer

I submitted the current work-in-progress rough patch to get some sort of software-only compositing using the threaded compositor.
The patch needs the reintroduction of TextureMapperImageBuffer to be applied first; should I upload that as a separate bug?
Comments on the general approach would be very welcome. :)

---

(In reply to comment #3)
> > Please blame my laziness; I fixed the build of the threaded compositor.
>
> Ah ah, no worries, I just picked an unfortunate time. :)
>
> > Anyway, it looks like I didn't understand your use case properly. Why not
> > use GL-accelerated compositing on the RPi2? AFAIK it has enough hardware power.
>
> The current (closed) GL stack has been deemed not reliable enough for
> WebKit. It works well enough for the limited usage seen in Kodi/XBMC, but
> WebKit may stress it too much, and it has been decided that we will need to
> do without GL until the new open stack based on Mesa becomes viable.

Can you roughly estimate how long this software compositing backend would have to be maintained in trunk, i.e. how long until you'd be able to switch back to an OpenGL-based implementation? For the last few years the desire has been to move completely towards depending on hardware acceleration for compositing, which is why TextureMapperImageBuffer was removed in the first place.

---

My current understanding is that we're still one or two years from having a GL implementation we can rely on.

The current patch didn't turn out to be extremely invasive: I guess it could also benefit from a couple of refinements to further lower its impact, if the chosen approach seems sensible.

---

Interest in this has dried up, and hopefully some sort of usable, free GL support for the Raspberry Pi should be available soon. Closing.

---

Comment on attachment 262186 [details]
Patch
At the Web Engines Hackfest, you mentioned that you're no longer interested in software-only compositing. For a better future....