Bug 294995
| Summary: | [WPE] Memory-mapped GPU buffers fail under DRM platform | | |
|---|---|---|---|
| Product: | WebKit | Reporter: | Nikolas Zimmermann <zimmermann> |
| Component: | WPE WebKit | Assignee: | Nikolas Zimmermann <zimmermann> |
| Status: | RESOLVED FIXED | | |
| Severity: | Normal | CC: | bugs-noreply |
| Priority: | P2 | | |
| Version: | WebKit Nightly Build | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
Nikolas Zimmermann
When using the DRM platform (new API), memory-mapping GPU buffers fails on i.MX platforms with a split GPU/IPU. The DRM platform implementation selects the first KMS-capable DRM node (e.g. /dev/dri/card1, which has no corresponding render node and is therefore a display controller) as the DRM device used to allocate buffers for display purposes, which is correct. However, we then asked that DRM node for its render node, which is non-existent, and thus /dev/dri/card1 was also passed on as 'drmRenderNode' to DRMDeviceManager, which in turn is used throughout WebCore for buffer allocation. There we want to allocate buffers that are _not_ displayed but used within rendering; in that case we should be using the GPU (here: /dev/dri/card0, /dev/dri/renderD128).

The main device (GPU) and the target device (IPU, the display controller) might differ. The Wayland platform handles this correctly by listening to the `main_device` and `tranche_target_device` linux-dmabuf-v1 events: the GPU is correctly used as the 'main device', whereas the display controller is used as the 'target device'. Fix the DRM platform to behave the same.
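For reference, the distinction can be illustrated with libdrm's device enumeration. The sketch below is not the WebKit code and makes no claim about DRMDeviceManager's internals; assuming libdrm's drmGetDevices2()/drmFreeDevices() API, it only shows picking the scanout (target) node and the render (main) node independently, instead of deriving the render node from the KMS device.

```cpp
// Minimal sketch: enumerate DRM devices and select the scanout node and the
// render node separately. On split GPU/IPU systems (e.g. i.MX), the display
// controller exposes only a primary/KMS node, while the GPU exposes a render
// node, so the two selections can land on different devices.
#include <xf86drm.h>
#include <cstdio>

int main()
{
    drmDevicePtr devices[16];
    int count = drmGetDevices2(0, devices, 16);
    if (count < 0)
        return 1;

    const char* scanoutNode = nullptr; // Primary/KMS node, used for buffers that are displayed.
    const char* renderNode = nullptr;  // GPU render node, used for buffers that are only rendered to.

    for (int i = 0; i < count; ++i) {
        drmDevicePtr device = devices[i];
        bool hasPrimary = device->available_nodes & (1 << DRM_NODE_PRIMARY);
        bool hasRender = device->available_nodes & (1 << DRM_NODE_RENDER);

        // A real implementation would also open the primary node and verify
        // KMS capability (e.g. via drmModeGetResources()) before using it for scanout.
        if (hasPrimary && !scanoutNode)
            scanoutNode = device->nodes[DRM_NODE_PRIMARY];

        // The render node is taken from whichever device actually provides one
        // (the GPU), never derived from the KMS/display device.
        if (hasRender && !renderNode)
            renderNode = device->nodes[DRM_NODE_RENDER];
    }

    std::printf("scanout (target) device: %s\n", scanoutNode ? scanoutNode : "none");
    std::printf("render (main) device:    %s\n", renderNode ? renderNode : "none");

    drmFreeDevices(devices, count);
    return 0;
}
```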
Nikolas Zimmermann
Pull request: https://github.com/WebKit/WebKit/pull/47208
EWS
Committed 297131@main (e3ab4a2e6678): <https://commits.webkit.org/297131@main>
Reviewed commits have been landed. Closing PR #47208 and removing active labels.