RESOLVED FIXED 211882
[WebGPU] Validation for GPUDevice.createTexture()
https://bugs.webkit.org/show_bug.cgi?id=211882
Summary [WebGPU] Validation for GPUDevice.createTexture()
Myles C. Maxfield
Reported 2020-05-13 23:11:59 PDT
[WebGPU] Validation for GPUDevice.createTexture()
Attachments
Patch (19.80 KB, patch)
2020-05-13 23:14 PDT, Myles C. Maxfield
no flags
Patch (18.92 KB, patch)
2020-05-13 23:18 PDT, Myles C. Maxfield
no flags
WIP (19.04 KB, patch)
2020-05-13 23:26 PDT, Myles C. Maxfield
no flags
Patch (27.12 KB, patch)
2020-05-14 18:10 PDT, Myles C. Maxfield
no flags
Patch (39.27 KB, patch)
2020-05-14 22:49 PDT, Myles C. Maxfield
no flags
??? (4.42 KB, text/plain)
2020-05-18 00:55 PDT, Myles C. Maxfield
no flags
Patch (40.97 KB, patch)
2020-05-18 01:07 PDT, Myles C. Maxfield
no flags
Patch (41.42 KB, patch)
2020-05-18 01:32 PDT, Myles C. Maxfield
no flags
Patch (44.97 KB, patch)
2020-05-18 02:01 PDT, Myles C. Maxfield
no flags
Patch (44.97 KB, patch)
2020-05-18 02:03 PDT, Myles C. Maxfield
dino: review+
Myles C. Maxfield
Comment 1 2020-05-13 23:14:38 PDT
Radar WebKit Bug Importer
Comment 2 2020-05-13 23:15:02 PDT
Myles C. Maxfield
Comment 3 2020-05-13 23:16:49 PDT
Remaining: Test usage flags, determine which of the gazillion tests are relevant / interesting (maybe all of them?), and create an -expected.txt file.
Myles C. Maxfield
Comment 4 2020-05-13 23:17:07 PDT
(In reply to Myles C. Maxfield from comment #3)
> Remaining: Test usage flags, determine which of the gazillion tests are
> relevant / interesting (maybe all of them?), and create an -expected.txt
> file.

And make sure all the validation in this patch is present in the spec.
Myles C. Maxfield
Comment 5 2020-05-13 23:18:06 PDT
Myles C. Maxfield
Comment 6 2020-05-13 23:25:12 PDT
Comment on attachment 399336 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=399336&action=review

> LayoutTests/webgpu/texture-creation.html:31
> +        assert_equals(error, null, "Texture failed to be created");

This could use error.message.
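A minimal sketch of that suggestion, assuming a testharness.js assert_equals and an error object that exposes a message string (the wording is illustrative, not taken from the patch):

// Include the validation error's own message when the assertion fails.
assert_equals(error, null,
    "Texture failed to be created" + (error ? ": " + error.message : ""));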
Myles C. Maxfield
Comment 7 2020-05-13 23:26:43 PDT
Myles C. Maxfield
Comment 8 2020-05-13 23:33:34 PDT
(In reply to Myles C. Maxfield from comment #4)
> (In reply to Myles C. Maxfield from comment #3)
> > Remaining: Test usage flags, determine which of the gazillion tests are
> > relevant / interesting (maybe all of them?), and create an -expected.txt
> > file.
>
> And make sure all the validation in this patch is present in the spec.

And test maximum texture sizes.
Myles C. Maxfield
Comment 9 2020-05-14 18:02:33 PDT
...And make sure the rest of the tests still pass
Myles C. Maxfield
Comment 10 2020-05-14 18:10:42 PDT
Myles C. Maxfield
Comment 11 2020-05-14 22:49:06 PDT
Myles C. Maxfield
Comment 12 2020-05-14 22:56:32 PDT
Comment on attachment 399456 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=399456&action=review

> LayoutTests/webgpu/texture-creation.html:89
> +        let texture = device.createTexture({size: {width: test.width, height: test.height, depth: test.depth}, mipLevelCount: test.mipLevelCount, sampleCount: test.sampleCount, dimension: test.dimension, format: test.format, usage: test.usage});

We can just pass in the test object instead of copying all the values out of it. Also, we should test default values.
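A sketch of that suggestion, assuming the entries in the test table are reshaped into complete GPUTextureDescriptor dictionaries (the field values and names below are illustrative):

// Each table entry doubles as the descriptor, so it can be passed straight
// to createTexture() instead of copying every field out of it.
const test = {
    size: {width: 4, height: 4, depth: 1},
    mipLevelCount: 1,
    sampleCount: 1,
    dimension: "2d",
    format: "rgba8unorm",
    usage: GPUTextureUsage.SAMPLED,
};
let texture = device.createTexture(test);

// Leaving out the optional members exercises the defaults
// (e.g. mipLevelCount, sampleCount, and dimension).
let defaulted = device.createTexture({size: test.size, format: test.format, usage: test.usage});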
Myles C. Maxfield
Comment 13 2020-05-14 22:56:38 PDT
Comment on attachment 399456 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=399456&action=review

> LayoutTests/webgpu/texture-creation.html:89
> +        let texture = device.createTexture({size: {width: test.width, height: test.height, depth: test.depth}, mipLevelCount: test.mipLevelCount, sampleCount: test.sampleCount, dimension: test.dimension, format: test.format, usage: test.usage});

We can just pass in the test object instead of copying all the values out of it. Also, we should test default values.
Myles C. Maxfield
Comment 14 2020-05-17 22:13:26 PDT
The test fails on my machine even without my patch applied.
Myles C. Maxfield
Comment 15 2020-05-17 23:51:57 PDT
I can reproduce the failure in a standalone Swift Playground:

import Foundation
import Metal

let device = MTLCreateSystemDefaultDevice()!

let textureDescriptor = MTLTextureDescriptor()
textureDescriptor.textureType = .type2DArray
textureDescriptor.pixelFormat = .rgba8Unorm
textureDescriptor.width = 512
textureDescriptor.height = 512
textureDescriptor.depth = 1
textureDescriptor.mipmapLevelCount = 9
textureDescriptor.sampleCount = 1
textureDescriptor.arrayLength = 2
textureDescriptor.storageMode = .managed
textureDescriptor.usage = .shaderRead
let texture = device.makeTexture(descriptor: textureDescriptor)!

let buffer = device.makeBuffer(length: 1048576, options: .storageModeShared)!
let ptr = buffer.contents().bindMemory(to: UInt8.self, capacity: 4)
ptr[0] = 0
ptr[1] = 255
ptr[2] = 0
ptr[3] = 255

let queue = device.makeCommandQueue()!
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeBlitCommandEncoder()!
encoder.copy(from: buffer, sourceOffset: 0, sourceBytesPerRow: 2048, sourceBytesPerImage: 0, sourceSize: MTLSize(width: 512, height: 512, depth: 1), to: texture, destinationSlice: 0, destinationLevel: 0, destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
encoder.synchronize(resource: texture)
encoder.endEncoding()

commandBuffer.addCompletedHandler { (commandBuffer) in
    var bytes = Array(repeating: UInt8(), count: 4)
    texture.getBytes(&bytes, bytesPerRow: 4, bytesPerImage: 4, from: MTLRegion(origin: MTLOrigin(x: 0, y: 0, z: 0), size: MTLSize(width: 1, height: 1, depth: 1)), mipmapLevel: 0, slice: 1)
    bytes[0]
    bytes[1]
    bytes[2]
    bytes[3]
}

commandBuffer.commit()
RunLoop.main.run()
Myles C. Maxfield
Comment 16 2020-05-17 23:54:56 PDT
(The result is [0, 0, 255, 59], which is unexpected.)
Myles C. Maxfield
Comment 17 2020-05-17 23:57:09 PDT
GFX9_MtlDevice AMD Radeon Pro Vega 56
Myles C. Maxfield
Comment 18 2020-05-18 00:08:47 PDT
Here's a better version:

import Foundation
import Metal

let device = MTLCopyAllDevices()[0]
device.name

let textureDescriptor = MTLTextureDescriptor()
textureDescriptor.textureType = .type2DArray
textureDescriptor.pixelFormat = .rgba8Unorm
textureDescriptor.width = 512
textureDescriptor.height = 512
textureDescriptor.depth = 1
textureDescriptor.mipmapLevelCount = 9
textureDescriptor.sampleCount = 1
textureDescriptor.arrayLength = 2
textureDescriptor.storageMode = .managed
textureDescriptor.usage = .shaderRead
let texture = device.makeTexture(descriptor: textureDescriptor)!

let buffer = device.makeBuffer(length: 1048576, options: .storageModeShared)!
let ptr = buffer.contents().bindMemory(to: UInt8.self, capacity: 1048576)
ptr[0] = 0
ptr[1] = 255
ptr[2] = 0
ptr[3] = 255

let queue = device.makeCommandQueue()!
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeBlitCommandEncoder()!
encoder.copy(from: buffer, sourceOffset: 0, sourceBytesPerRow: 2048, sourceBytesPerImage: 1048576, sourceSize: MTLSize(width: 1, height: 1, depth: 1), to: texture, destinationSlice: 0, destinationLevel: 0, destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
encoder.synchronize(resource: texture)
encoder.endEncoding()

commandBuffer.addCompletedHandler { (commandBuffer) in
    if commandBuffer.error != nil {
        print("\(commandBuffer.error!)")
    }
    var bytes = Array(repeating: UInt8(), count: 1048576)
    texture.getBytes(&bytes, bytesPerRow: 4, bytesPerImage: 4, from: MTLRegion(origin: MTLOrigin(x: 0, y: 0, z: 0), size: MTLSize(width: 512, height: 512, depth: 1)), mipmapLevel: 0, slice: 0)
    bytes[0]
    bytes[1]
    bytes[2]
    bytes[3]
}

commandBuffer.commit()
RunLoop.main.run()
Myles C. Maxfield
Comment 19 2020-05-18 00:23:30 PDT
Oh, if you replace bytesPerRow: 4, bytesPerImage: 4 with bytesPerRow: 2048, bytesPerImage: 1048576 in the getBytes call, then it looks like it starts working.
Myles C. Maxfield
Comment 20 2020-05-18 00:32:15 PDT
Changing the test to pass in a non-zero imageHeight doesn't solve the problem.
Myles C. Maxfield
Comment 21 2020-05-18 00:55:00 PDT
Myles C. Maxfield
Comment 22 2020-05-18 01:07:17 PDT
Myles C. Maxfield
Comment 23 2020-05-18 01:14:03 PDT
const textureDataBufferDescriptor = {
    size: imageData.data.length,
    usage: GPUBufferUsage.COPY_SRC | GPUBufferUsage.MAP_READ
};

Seems to fix it for some reason?
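For contrast, the presumed shape of the descriptor before this change (an assumption, not quoted from the test), with no mapping usage flag at all:

// Presumed original form: only COPY_SRC, no MAP_READ/MAP_WRITE, which would
// be consistent with the staging-buffer path that comment 24 points at.
const textureDataBufferDescriptor = {
    size: imageData.data.length,
    usage: GPUBufferUsage.COPY_SRC
};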
Myles C. Maxfield
Comment 24 2020-05-18 01:22:48 PDT
Looks like GPUBuffer::copyStagingBufferToGPU() doesn't call -[MTLBuffer didModifyRange:]
Myles C. Maxfield
Comment 25 2020-05-18 01:32:41 PDT
Myles C. Maxfield
Comment 26 2020-05-18 01:37:45 PDT
Looks like the combination of no mapping flags along with createBufferMapped is busted somehow.
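A sketch of that combination, assuming the 2020-era WebGPU draft API in which createBufferMapped() returned a [buffer, ArrayBuffer] pair; the identifiers here are illustrative rather than taken from the test:

// Buffer created with no mapping usage flag, then filled through the
// mapped ArrayBuffer handed back by createBufferMapped().
const [textureDataBuffer, mapping] = device.createBufferMapped({
    size: imageData.data.length,
    usage: GPUBufferUsage.COPY_SRC
});
new Uint8Array(mapping).set(imageData.data);
textureDataBuffer.unmap();

If the data only reaches the GPU-visible MTLBuffer through GPUBuffer::copyStagingBufferToGPU(), the missing -[MTLBuffer didModifyRange:] call from comment 24 would explain why adding MAP_READ in comment 23 side-steps the problem.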
Myles C. Maxfield
Comment 27 2020-05-18 02:01:48 PDT
Myles C. Maxfield
Comment 28 2020-05-18 02:03:00 PDT
Dean Jackson
Comment 29 2020-05-18 11:16:12 PDT
Comment on attachment 399633 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=399633&action=review

Remember to not commit the change to the scheme.

> Source/WebCore/WebCore.xcodeproj/xcshareddata/xcschemes/WebCore.xcscheme:47
> +       <PathRunnable
> +          runnableDebuggingMode = "0"
> +          BundleIdentifier = "org.webkit.MiniBrowser"
> +          FilePath = "/Users/mmaxfield/Build/Products/Debug/MiniBrowser.app">
> +       </PathRunnable>

oops

> Source/WebCore/platform/graphics/gpu/GPUDevice.cpp:118
> +     if (descriptor.size.width > 8192 || descriptor.size.height > 8192) {

Please static const this max value.

> Source/WebCore/platform/graphics/gpu/GPUDevice.cpp:122
> +     if (descriptor.size.depth > 2048) {

And this.

> Source/WebCore/platform/graphics/gpu/GPUDevice.cpp:128
> +     if (descriptor.size.width > 2048 || descriptor.size.height > 2048 || descriptor.size.depth > 2048) {

And this.

Also, should we provide the maximum values in the error?
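As a side note, a sketch of the kind of layout-test case that could exercise these limits, using the 8192 value quoted above; the error-scope calls and constant names follow the 2020-era WebGPU draft, and the surrounding async test structure is assumed:

// Expect a validation error for a 2D texture wider than the 8192 limit
// quoted in the review above.
device.pushErrorScope("validation");
device.createTexture({
    size: {width: 8193, height: 1, depth: 1},
    format: "rgba8unorm",
    usage: GPUTextureUsage.SAMPLED,
});
const error = await device.popErrorScope();
assert_not_equals(error, null, "Oversized texture should generate a validation error");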
Simon Fraser (smfr)
Comment 30 2020-05-18 12:06:22 PDT
Comment on attachment 399633 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=399633&action=review

> Source/WebCore/ChangeLog:10
> +        Add lots of validation for texture creation. The logic was gathered from
> +        trial and error.

"by trial and error"

> Source/WebCore/platform/graphics/gpu/GPUDevice.cpp:83
> +     for (; !(maxDimension & (1 << i)) && i >= 0; --i) {
> +     }

Weird to have this empty loop; maybe a while() is clearer? Is this just finding where the high bits are?

>> Source/WebCore/platform/graphics/gpu/GPUDevice.cpp:128
>> +     if (descriptor.size.width > 2048 || descriptor.size.height > 2048 || descriptor.size.depth > 2048) {
>
> And this.
>
> Also, should we provide the maximum values in the error?

Yes.

> Source/WebCore/platform/graphics/gpu/GPUDevice.cpp:141
> +         m_errorScopes->generatePrefixedError("Too many miplevels.");

Show the values in the error.

> Source/WebCore/platform/graphics/gpu/GPUObjectBase.h:43
> +     GPUErrorScopes& errorScopes() const { return m_errorScopes; }

Return const GPUErrorScopes&?

> Source/WebCore/platform/graphics/gpu/GPUTextureUsage.h:44
>      CopySource = 1 << 0,
>      CopyDestination = 1 << 1,
>      Sampled = 1 << 2,
>      Storage = 1 << 3,
>      OutputAttachment = 1 << 4,
> +     MaximumValue = 1 << 5,

Cool kids are lining these up these days.

> Source/WebCore/platform/graphics/gpu/cocoa/GPUTextureMetal.mm:163
> +         errorScopes.generatePrefixedError("Device does not support this sample count.");

Error should show the value of descriptor.sampleCount.
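To illustrate the loop question: the quoted for loop walks i down until it hits the highest set bit of maxDimension. A while-loop version of the same scan (sketched in JavaScript purely for illustration; the starting index and variable names are assumptions):

// Walk i down from the top bit until bit i of maxDimension is set;
// i ends up as the index of the highest set bit (or -1 if maxDimension is 0).
let i = 31;
while (i >= 0 && !(maxDimension & (1 << i)))
    --i;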
Myles C. Maxfield
Comment 31 2020-05-18 18:48:28 PDT