RESOLVED FIXED 220649
[GPU Process] Make getImageData use display list with shared memory instead of sending a sync IPC
https://bugs.webkit.org/show_bug.cgi?id=220649
Ryosuke Niwa
Reported 2021-01-15 00:24:10 PST
Improve the latency of getImageData by turning it into a display list command and getting the output with shared memory.
Attachments
WIP (35.37 KB, patch)
2021-01-15 00:24 PST, Ryosuke Niwa
no flags
Ryosuke's WIP (+ hang fix) (39.81 KB, patch)
2021-01-18 16:31 PST, Wenson Hsieh
no flags
Updated to use semaphore (38.59 KB, patch)
2021-01-28 01:28 PST, Ryosuke Niwa
no flags
WIP (37.80 KB, patch)
2021-03-09 21:26 PST, Ryosuke Niwa
no flags
Ryosuke Niwa
Comment 1 2021-01-15 00:24:44 PST
Ryosuke Niwa
Comment 2 2021-01-15 00:26:53 PST
With this WIP patch, we can sometimes get stuck in RemoteImageBufferProxy<WebKit::ImageBufferShareableMappedIOSurfaceBackend>::getImageData forever. As far as I can tell, we never wake up the GPU Process. Wenson, am I doing something wrong here in RemoteImageBufferProxy<WebKit::ImageBufferShareableMappedIOSurfaceBackend>::getImageData by sending flushDrawingContextAsync?
Wenson Hsieh
Comment 3 2021-01-18 16:31:00 PST
Created attachment 417852 [details] Ryosuke's WIP (+ hang fix)
Wenson Hsieh
Comment 4 2021-01-18 16:45:56 PST
(In reply to Ryosuke Niwa from comment #2)
> With this WIP patch, we can sometimes get stuck in RemoteImageBufferProxy<WebKit::ImageBufferShareableMappedIOSurfaceBackend>::getImageData forever. As far as I can tell, we never wake up the GPU Process.
>
> Wenson, am I doing something wrong here in RemoteImageBufferProxy<WebKit::ImageBufferShareableMappedIOSurfaceBackend>::getImageData by sending flushDrawingContextAsync?

The previous patch was hanging in the case where the wakeup message is sent for some destination image buffer A, but a sync GetImageData is later appended for a different destination image buffer B. In this case, since we're directly appending the item to the display list, we bypass the DisplayList::Recorder delegate methods, which call out to the recorder client (in this case, RemoteImageBufferProxy::willAppendItemOfType); those hooks are what ultimately allow us to append "change destination image buffer" items when synchronizing display list items between multiple destination image buffers. As a result, the GPUP (in RemoteRenderingBackend) never learned that it needed to switch from image buffer A to image buffer B, and thus never processed the GetImageData.

To fix this, I added a DisplayList::Recorder method for getImageData that calls into the recorder client hooks, and used that instead of the DisplayList method in RemoteImageBufferProxy::getImageData. MotionMark Images seems to run smoothly after this adjustment.
Radar WebKit Bug Importer
Comment 5 2021-01-22 00:25:13 PST
Ryosuke Niwa
Comment 6 2021-01-28 01:28:44 PST
Created attachment 418630 [details] Updated to use semaphore
Said Abou-Hallawa
Comment 7 2021-01-28 11:11:23 PST
Comment on attachment 418630 [details] Updated to use semaphore

View in context: https://bugs.webkit.org/attachment.cgi?id=418630&action=review

> Source/WebCore/platform/graphics/ImageBuffer.h:131
> + virtual void copyImageData(void*, AlphaPremultiplication outputFormat, const IntRect& srcRect) const = 0;

It is unclear whether this ImageBuffer is the source or the destination of the copy operation. I would suggest making it an override of getImageData(). Also, this call looks insecure since we do not know the size of the "void*" buffer. We use "void*" when calling private methods like copyImagePixels(), but those calls are for buffers that we allocate or that we are sure are sufficient for the data to be copied to or from. I would suggest passing the number of bytes of the destination buffer.

> Source/WebCore/platform/graphics/ImageBufferBackend.cpp:262
> + void ImageBufferBackend::copyImageData(void* destData, AlphaPremultiplication outputFormat, const IntRect& srcRect, void* srcData) const

ImageBufferBackend::getImageData() above can be written in terms of this function:

RefPtr<ImageData> ImageBufferBackend::getImageData(AlphaPremultiplication outputFormat, const IntRect& srcRect, void* data) const
{
    auto imageData = ImageData::create(srcRectScaled.size());
    copyImageData(imageData->data()->data(), outputFormat, srcRect, data);
    return imageData;
}

> Source/WebCore/platform/graphics/cairo/ImageBufferCairoSurfaceBackend.cpp:128
> + RefPtr<ImageData> ImageBufferCairoSurfaceBackend::copyImageData(void* buffer, AlphaPremultiplication outputFormat, const IntRect& srcRect) const

The return type of this function should be "void". Also, its prototype is not declared in ImageBufferCairoSurfaceBackend.h.

> Source/WebKit/GPUProcess/graphics/RemoteRenderingBackend.cpp:304
> + void RemoteRenderingBackend::getImageData(const WebCore::IntSize& size, CompletionHandler<void(const SharedMemory::IPCHandle& handle)>&& completionHandler)

No need for WebCore:: in "const WebCore::IntSize& size".

Also, what is the use of this function now, after deleting the ImageBuffer renderingResourceIdentifier and the outputFormat? What exactly does it return in the completionHandler?

> Source/WebKit/WebProcess/GPU/graphics/RemoteRenderingBackendProxy.cpp:171
> + IntSize newSize = m_imageDataBufferSize.expandedTo(desiredSize);
> + SharedMemory::IPCHandle handle;
> + sendSync(Messages::RemoteRenderingBackend::GetImageData(newSize), Messages::RemoteRenderingBackend::GetImageData::Reply(handle), m_renderingBackendIdentifier, 1_s);
> +
> + m_imageDataBuffer = SharedMemory::map(handle.handle, SharedMemory::Protection::ReadWrite);
> + m_imageDataBufferSize = newSize;

Can't we make managing the SharedMemory be handled completely by the WebProcess? I think this scenario might work:

RemoteRenderingBackendProxy::getImageData():
    Ensures the size of m_imageDataBuffer is sufficient for the requested image data.
    Records getImageData() and passes the handle of m_imageDataBuffer in the DL item.
    waitForImageData(): creates an ImageData and copies from m_imageDataBuffer to this new ImageData.

RemoteRenderingBackend::getImageData():
    Maps the handle of the SharedMemory in GetImageData to a SharedMemory.
    Calls copyImageData().
    Signals the semaphore.

> Source/WebKit/WebProcess/GPU/graphics/RemoteRenderingBackendProxy.h:134
> + RefPtr<SharedMemory> m_imageDataBuffer;
> + WebCore::IntSize m_imageDataBufferSize;

Why do we need to store the size of the SharedMemory in m_imageDataBufferSize? Can't we use m_imageDataBuffer->size() instead?
Ryosuke Niwa
Comment 8 2021-02-12 23:26:56 PST
(In reply to Said Abou-Hallawa from comment #7)
> Comment on attachment 418630 [details] Updated to use semaphore
>
> > Source/WebCore/platform/graphics/ImageBuffer.h:131
> > + virtual void copyImageData(void*, AlphaPremultiplication outputFormat, const IntRect& srcRect) const = 0;
>
> It is unclear whether this ImageBuffer is the source or the destination of
> the copy operation. I would suggest making it an override of getImageData().

I'd rather avoid function overrides as they could be confusing. I'd probably call this function copyImageDataToBuffer or something.

> Also, this call looks insecure since we do not know the size of the "void*"
> buffer. I would suggest passing the number of bytes of the destination buffer.

Well, you reviewed this way too early. There is a reason I didn't put this up for review.

> > Source/WebKit/GPUProcess/graphics/RemoteRenderingBackend.cpp:304
> > + void RemoteRenderingBackend::getImageData(const WebCore::IntSize& size, CompletionHandler<void(const SharedMemory::IPCHandle& handle)>&& completionHandler)
>
> Also, what is the use of this function now, after deleting the ImageBuffer
> renderingResourceIdentifier and the outputFormat? What exactly does it
> return in the completionHandler?

We need the IPC to get a new shared memory buffer when the size of getImageData is larger than the current buffer size, or when the shared memory hasn't been allocated yet. I just re-used the old IPC to implement that.

Again, a pre-emptive code review :)

> > Source/WebKit/WebProcess/GPU/graphics/RemoteRenderingBackendProxy.cpp:171
>
> Can't we make managing the SharedMemory be handled completely by the
> WebProcess? I think this scenario might work: [...]

No, you can't pass a shared memory in a display list. There is no mechanism to share a new shared memory region by sharing some bytes in an existing shared memory; we need to send it as a mach port attachment via IPC. We also want to allocate this memory in the GPU Process so that only the GPU Process has read-write access to the memory while WebContent has read-only access to it.

There is an alternative design in which the GPU Process sends a new shared memory buffer when it detects that the newly requested getImageData doesn't fit in the existing buffer, but this is kind of tricky because the WebContent process would then have to wait for both the semaphore and potentially an IPC to arrive. In practice, this would mean there needs to be some data indicating that the new IPC is coming, and then WebContent has to send a sync IPC to the GPU Process. That would be a lot of back and forth between the GPU and WebContent processes, which would incur extra overhead.

> > Source/WebKit/WebProcess/GPU/graphics/RemoteRenderingBackendProxy.h:134
> > + RefPtr<SharedMemory> m_imageDataBuffer;
> > + WebCore::IntSize m_imageDataBufferSize;
>
> Why do we need to store the size of the SharedMemory in
> m_imageDataBufferSize? Can't we use m_imageDataBuffer->size() instead?

This was really for expedience to get things going, but you're right that we don't really need it.
Ryosuke Niwa
Comment 9 2021-03-09 21:26:09 PST
Myles C. Maxfield
Comment 10 2021-03-29 14:50:40 PDT
Closing because the "Depends on" bugs are all fixed.