The logic in WebGLRenderingContext::validateCompressedTexDimensions is overly permissive. For example, it claims that 256x256 is a valid set of dimensions for a 1x1 buffer.
<rdar://problem/15818118>
Part of the problem is that it doesn't check the dimensions against the size of the data actually supplied; it only checks that they are whole multiples of the compression block size.
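A minimal sketch of the kind of check that is needed (hypothetical, not the actual WebKit patch; it assumes 4x4 compression blocks of 8 bytes each, as in DXT1): round each dimension up to a whole number of blocks, reject anything over the maximum texture size, and require the supplied data to be exactly the size those blocks imply, so 256x256 can never pass for a 1x1 (single-block) upload.

// Hypothetical sketch only; names and block parameters are assumptions, not WebCore code.
#include <cstddef>

bool validateCompressedTexDimensions(int width, int height, std::size_t dataSizeInBytes, int maxTextureSize)
{
    // Assumed format: 4x4 blocks, 8 bytes per block (DXT1-style).
    const int blockWidth = 4;
    const int blockHeight = 4;
    const std::size_t bytesPerBlock = 8;

    if (width <= 0 || height <= 0 || width > maxTextureSize || height > maxTextureSize)
        return false;

    // Round each dimension up to a whole number of blocks.
    const std::size_t blocksWide = (width + blockWidth - 1) / blockWidth;
    const std::size_t blocksHigh = (height + blockHeight - 1) / blockHeight;

    // The supplied data must be exactly the size the block-aligned dimensions imply.
    return blocksWide * blocksHigh * bytesPerBlock == dataSizeInBytes;
}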
Created attachment 221218: Patch
Comment on attachment 221218: Patch
View in context: https://bugs.webkit.org/attachment.cgi?id=221218&action=review

> Source/WebCore/html/canvas/WebGLRenderingContext.cpp:5391
> + const GC3Dint maxTextureSize = (target) ? m_maxTextureSize : m_maxCubeMapTextureSize;

No need for the parentheses around target.
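In other words, the reviewer is asking for the redundant parentheses to be dropped; a before/after sketch of the quoted line (variable names taken from the patch hunk above):

// Before: redundant parentheses around 'target'.
const GC3Dint maxTextureSize = (target) ? m_maxTextureSize : m_maxCubeMapTextureSize;

// After: same behaviour, without the parentheses.
const GC3Dint maxTextureSize = target ? m_maxTextureSize : m_maxCubeMapTextureSize;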
Committed r162031: <http://trac.webkit.org/changeset/162031>
Still seeing some test output diffs: http://build.webkit.org/results/Apple%20Mavericks%20Release%20WK2%20(Tests)/r162050%20(2225)/fast/canvas/webgl/webgl-compressed-texture-size-limit-pretty-diff.html
(In reply to comment #6)
> Still seeing some test output diffs:
> http://build.webkit.org/results/Apple%20Mavericks%20Release%20WK2%20(Tests)/r162050%20(2225)/fast/canvas/webgl/webgl-compressed-texture-size-limit-pretty-diff.html

I added an ML baseline for the difference in driver error output.
ML rebaseline landed in https://trac.webkit.org/r162070.