When WebXRWebGLLayer creates the WebXROpaqueFramebuffer, it asks the session for the recommended framebuffer size and then multiplies the width by 2, presumably because there are two eyes. However, the specification [1] says this size is a "best estimate of the WebGL framebuffer resolution large enough to contain all of the session’s XRViews", so it should be the session that accounts for the multiple views, not the framebuffer. Note also "best estimate": currently the dimensions of the framebuffer are set once, when the WebGLLayer is created, but I think they should be checked each frame. For example, a headset under load might start providing smaller textures for rendering.
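To illustrate, a per-frame check could look roughly like the sketch below. This is not WebKit's actual code; the class and method names are illustrative, and the view count is taken from the session rather than hard-coding the factor of 2.

```cpp
#include <cstdint>

// Hypothetical per-eye recommended size, as reported by the session each frame.
struct FramebufferSize {
    uint32_t width = 0;
    uint32_t height = 0;
    bool operator==(const FramebufferSize& other) const
    {
        return width == other.width && height == other.height;
    }
};

// Sketch of a per-frame size check: reallocate the opaque framebuffer storage
// only when the session's recommendation changes.
class OpaqueFramebufferSketch {
public:
    // Returns true when the framebuffer storage had to be (re)allocated.
    bool ensureSize(FramebufferSize recommendedPerEye, unsigned viewCount)
    {
        // Widen for side-by-side views; the multiplier comes from the session's
        // view count instead of a constant 2.
        FramebufferSize wanted { recommendedPerEye.width * viewCount,
                                 recommendedPerEye.height };
        if (wanted == m_allocated)
            return false;
        m_allocated = wanted; // stand-in for the actual texture reallocation
        return true;
    }

    FramebufferSize allocated() const { return m_allocated; }

private:
    FramebufferSize m_allocated;
};
```

Calling ensureSize() at the top of each frame makes reallocation a no-op in the common case while still picking up a smaller recommended size from a headset under load.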
<rdar://problem/78638309>
Created attachment 430080 [details] Patch
Committed r278255 (238292@main): <https://commits.webkit.org/238292@main>
> > Note also: "best estimate". Currently the dimensions of the framebuffer are
> > set once as the WebGLLayer is created. I think it should be checked each
> > frame. For example, the headset might be under load and start providing
> > smaller textures for rendering.

The spec supports dynamic viewport scaling. The benefit is that you can change the resolution on a per-frame basis without reallocating the WebGLLayer framebuffer. Right now we return nullopt in WebXRView::recommendedViewportScale, so the full viewport is always used. If you have a heuristic to determine the recommended scale per frame, you can already implement it on the Cocoa platform. For this to work, the underlying SDK should support setting the UV rect when submitting the frame to the headset.
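The viewport math behind that is small. The following sketch (not WebKit code; the helper name and the exact clamping policy are assumptions) shrinks the rendered region inside the existing framebuffer, which is the core of dynamic viewport scaling:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct Viewport {
    uint32_t x = 0;
    uint32_t y = 0;
    uint32_t width = 0;
    uint32_t height = 0;
};

// Sketch of dynamic viewport scaling: render into a smaller region of the
// already-allocated framebuffer instead of reallocating it. Non-positive
// scales are ignored and the scale is capped at 1.0; a real implementation
// would follow the spec's requestViewportScale clamping rules.
Viewport scaledViewport(Viewport full, double requestedScale)
{
    if (requestedScale <= 0.0)
        return full; // ignore invalid scales, keep the full viewport
    double scale = std::min(requestedScale, 1.0);
    return { full.x, full.y,
             static_cast<uint32_t>(std::floor(full.width * scale)),
             static_cast<uint32_t>(std::floor(full.height * scale)) };
}
```

The compositor would then submit the matching UV rect to the headset, so only the scaled sub-region of the texture is sampled.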