I misunderstood UsageValidationRules.md: resource usages must be checked at pass-encoding time, e.g. to ensure that a GPUBuffer is not bound as both a STORAGE and a VERTEX buffer within a single render pass. Across separate passes, the same GPUBuffer may be used as either.
This is necessary to make compute demos easier to write.
<rdar://problem/54419993>
This patch requires a few follow-up patches:

1) The GPURenderPipeline/GPUComputePipeline classes should follow the hierarchy used here. GPUErrorScopes exist for the sake of the WebGPU API, not the internal classes, and errors are reported at the top level, so the "WebGPU" object should own the reference. This also makes it easier to implement the rest of "GPUObjectBase" to match the spec.

2) The same relaxation of usage-rules validation should be applied to GPUTexture creation.

3) GPUProgrammablePassEncoder.setBindGroup, GPURenderPassEncoder.setVertexBuffers, and GPURenderPassEncoder.setIndexBuffer need to enforce the validation rules outlined in https://github.com/gpuweb/gpuweb/blob/master/design/UsageValidationRules.md (and soon in the spec).

Will link bugs after they're created.
Created attachment 377394 [details] Patch
Comment on attachment 377394 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=377394&action=review

> Source/WebCore/platform/graphics/gpu/GPUBuffer.h:99
> +    void registerMappingCallback(MappingCallback&&, bool, GPUErrorScopes&);
> +    void unmap(GPUErrorScopes*);
> +    void destroy(GPUErrorScopes*);

Why isn't the GPUErrorScopes attached to the buffer itself? Or do different map and unmap calls have different scopes?

Also, why are these pointers? I see it is because destroy can be called with a nullptr from the destructor, but maybe we should have a default error scope that always exists.
(In reply to Dean Jackson from comment #5)
> Why isn't the GPUErrorScopes attached to the buffer itself? Or do different
> map and unmap calls have different scopes?
>
> Also, why are these pointers? I see it is because destroy can be called with
> a nullptr from the destructor, but maybe we should have a default error
> scope that always exists.

GPUErrorScopes is owned by the WebGPUBuffer via GPUObjectBase. Both WebGPUBuffer and GPUBuffer methods need access to it, but I chose this over having both classes ref it. My reasoning was influenced by your second point: the destructor should call the same cleanup functions, but shouldn't "throw exceptions" that would expose GC timing to JavaScript. So GPUBuffer doesn't need to own a ref to GPUErrorScopes; every error that can happen at this layer happens because a WebGPUBuffer method was called first.
Created attachment 377400 [details] Patch for landing
Comment on attachment 377400 [details]
Patch for landing

Clearing flags on attachment: 377400

Committed r249183: <https://trac.webkit.org/changeset/249183>
All reviewed patches have been landed. Closing bug.
Follow-up bugs:
1) https://bugs.webkit.org/show_bug.cgi?id=201207
2) https://bugs.webkit.org/show_bug.cgi?id=201205
3) https://bugs.webkit.org/show_bug.cgi?id=201208, https://bugs.webkit.org/show_bug.cgi?id=201209, https://bugs.webkit.org/show_bug.cgi?id=201210