The performance of texImage2D and texSubImage2D calls passing a video is very slow (upwards of 4 ms) for most video sizes (above 240p).
The linked test records the timing of these uploads and the framerate, and it contains historical measurements across a variety of browsers and platforms.
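The core of such a measurement can be sketched in a few lines. This is illustrative, not the linked test's actual code; `timeUpload` and `summarize` are hypothetical names, and `uploadFn` stands in for the real `gl.texSubImage2D(..., video)` call:

```javascript
// Minimal sketch (not the linked test's actual code) of timing one
// video upload per frame. uploadFn wraps something like:
//   gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, gl.RGBA, gl.UNSIGNED_BYTE, video)
function timeUpload(uploadFn) {
  const t0 = performance.now();
  uploadFn();
  return performance.now() - t0; // upload time in milliseconds
}

// Aggregate per-frame samples into FPS/UPms-style summary figures.
function summarize(samples) {
  const sum = samples.reduce((a, b) => a + b, 0);
  return { mean: sum / samples.length, max: Math.max(...samples) };
}
```

Note that GL calls are pipelined, so a wall-clock timing like this measures submission cost on the calling thread, not necessarily when the upload actually completes on the GPU.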
Retested with Chrome 36.0.1985.125, Firefox Aurora 33.0a2, Safari 6.1.4 and Internet Explorer 11
TL;DR Firefox now outperforms Chrome by around 600% on OSX and Linux. Safari has the worst performance and worst video format support.
Summary of result tables, for full results please see: http://codeflow.org/issues/slow_video_to_texture/
Ubuntu:
FPS UPms FPS UPms FPS UPms
Chrome 36 1080p 46.42 8.08 46.22 7.97 46.72 7.80
Chrome 35 1080p 45.28 8.02 45.10 8.03 45.39 7.98
Firefox Aurora 1080p 60.04 1.00 60.04 1.14 60.02 1.18
OSX:
FPS UPms FPS UPms FPS UPms
Chrome 36 1080p 59.56 9.91 59.72 9.96 59.78 9.91
Chrome 35 1080p 59.77 13.47 58.88 14.51 59.85 13.69
Firefox Aurora 1080p N/A N/A 59.50 1.54 59.07 1.50
Safari 6.1.4 1080p 20.07 34.94 N/A N/A N/A N/A
Windows 8.1:
FPS UPms FPS UPms FPS UPms
Chrome 36 1080p 51.22 11.53 59.53 7.16 59.23 7.23
Firefox Aurora 1080p 57.14 9.55 56.89 9.41
IE11 video not supported for upload
Conclusion: Firefox has made good improvements in its Aurora build (scheduled to reach beta in early September and release in early October) on OSX and Linux; these builds now outperform Chrome by a substantial margin (~600%) and have pushed upload times into the 1-2 ms range. Chrome 36 has also made some gains over Chrome 35 on OSX (~45%). Firefox has yet to land these improvements on Windows, and it also still has to deal with bugs in its H.264 support on Windows.
Safari Issues:
- Only H.264 is supported
- The H.264 support is very slow and choppy
Created attachment 252875
Archive of layout-test-results from ews100 for mac-mavericks
The attached test failures were seen while running run-webkit-tests on the mac-ews.
Bot: ews100 Port: mac-mavericks Platform: Mac OS X 10.9.5
Created attachment 252877
Archive of layout-test-results from ews107 for mac-mavericks-wk2
The attached test failures were seen while running run-webkit-tests on the mac-wk2-ews.
Bot: ews107 Port: mac-mavericks-wk2 Platform: Mac OS X 10.9.5
Created attachment 252891
Archive of layout-test-results from ews101 for mac-mavericks
The attached test failures were seen while running run-webkit-tests on the mac-ews.
Bot: ews101 Port: mac-mavericks Platform: Mac OS X 10.9.5
Created attachment 252892
Archive of layout-test-results from ews106 for mac-mavericks-wk2
The attached test failures were seen while running run-webkit-tests on the mac-wk2-ews.
Bot: ews106 Port: mac-mavericks-wk2 Platform: Mac OS X 10.9.5
Created attachment 252898
Archive of layout-test-results from ews100 for mac-mavericks
The attached test failures were seen while running run-webkit-tests on the mac-ews.
Bot: ews100 Port: mac-mavericks Platform: Mac OS X 10.9.5
Created attachment 252900
Archive of layout-test-results from ews104 for mac-mavericks-wk2
The attached test failures were seen while running run-webkit-tests on the mac-wk2-ews.
Bot: ews104 Port: mac-mavericks-wk2 Platform: Mac OS X 10.9.5
Florian, your server does not support HTTP Range Requests; as such, WebKit falls back to 32-bit QuickTime to load, decode, and display these movies. This may go a long way towards explaining why WebKit seems to take so much longer to upload images.
(In reply to comment #19)
> Florian, your server does not support HTTP Range Requests; as such, WebKit
> falls back to 32-bit QuickTime to load, decode, and display these movies.
> This may go a long way towards explaining why WebKit seems to take so much
> longer to upload images.
It's true that my server does not support range requests. However, two points about that.
1) So you're saying that to performantly put video into WebGL on the client you have to serve video with range requests. Or do you rather mean to express that this is a bug that you haven't filed yet? Do you want me to file it? I'd be happy to file "WebKit does not performantly handle video if the server does not support http range requests". Unfortunately that will then attract half a dozen other bug tickets, see point #2.
2) Browser behavior regarding range requests for video is highly inconsistent, not HTTP-conformant, and frequently leads to videos breaking. Actually supporting range requests for video at this time is inadvisable, and the official recommendation I've gotten from Google and Apple is to use MSE to handle streaming video and not to rely on HTTP range requests.
(In reply to comment #20)
> (In reply to comment #19)
> > Florian, your server does not support HTTP Range Requests; as such, WebKit
> > falls back to 32-bit QuickTime to load, decode, and display these movies.
> > This may go a long way towards explaining why WebKit seems to take so much
> > longer to upload images.
>
> It's true that my server does not support range requests. However, two
> points about that.
>
> 1) So you're saying that to performantly put video into WebGL on the client
> you have to serve video with range requests. Or do you rather mean to
> express that this is a bug that you haven't filed yet? Do you want me to
> file it? I'd be happy to file "WebKit does not performantly handle video if
> the server does not support http range requests". Unfortunately that will
> then attract half a dozen other bug tickets, see point #2.
No, I'm saying that expecting fast texture upload from an out-of-process, 32-bit media stack as ancient as QuickTime is unreasonable.
> 2) Browser behavior regarding to requests for video in regards to ranges is
> highly inconsistent, non http conformant and frequently leads to videos
> breaking. Actually supporting range requests for videos at this time is
> inadvisable, and the official recommendation I've gotten by Google and Apple
> is to use MSE to handle streaming video and not to rely on http range
> requests.
That is entirely misinformed, and I HIGHLY doubt anyone at Apple told you to use MSE to do streaming video.
HTTP Range Request support is a necessity. Without it, media-over-HTTP will not play at all in iOS, and soon, it will not be supported on desktop Safari as well.
(In reply to comment #21)
> No, I'm saying that expecting fast texture upload from an out-of-process,
> 32-bit media stack as ancient as QuickTime is unreasonable.
Then fix it. I see no reason you should do that no matter where the bytes you consume come from. That constitutes a bug if I ever saw one. Plus, do you even have a testcase of your own to check video upload speed? Cause I kinda doubt it, seeing as this issue goes into its 4th year anniversary and nobody's done jack squat about it. So if you disagree with my testcase, or server, or whatever, go make your own; I promise it's quite an eye-opening experience to see how utterly horrid the implementation across most every UA, including yours, is.
> That is entirely misinformed, and I HIGHLY doubt anyone at Apple told you to
> use MSE to do streaming video.
>
> HTTP Range Request support is a necessity. Without it, media-over-HTTP will
> not play at all in iOS, and soon, it will not be supported on desktop Safari
> as well.
Well, good luck with that, don't say I didn't warn you that it's a shambles.
(In reply to comment #22)
> (In reply to comment #21)
> > No, I'm saying that expecting fast texture upload from an out-of-process,
> > 32-bit media stack as ancient as QuickTime is unreasonable.
>
> Then fix it.
No. QuickTime and QTKit are deprecated technologies. Support for QTKit has been disabled in WebKit Nightlies, and will be disabled in Safari 9.
> I see no reason you should do that no matter where the bytes
> you consume come from. That constitutes a bug if I ever saw one. Plus, do
> you even have a testcase of your own to check video upload speed? Cause I
> kinda doubt it, seeing as this issue goes into its 4th year anniversary
...2nd...
> and
> nobody done jack squat about it.
You do realize you're directing this at the person who is currently in the middle of fixing the very issue you're complaining about (video upload speed), right?
> So if you disagree with my testcase, or
> server, or whatever, go make your own, I promise it's quite an eye opening
> experience to see how utterly horrid the implementation across most every
> UA, including yours, is.
The number of loads which fall back to QTKit due to lack of server Range support is incredibly, incredibly small. So, the balance of probabilities leads me to believe that the problem is not with HTTP Range requests, but with how well your server implements them.
Comment on attachment 252893
Patch
View in context: https://bugs.webkit.org/attachment.cgi?id=252893&action=review
> Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateAVFoundationObjC.h:242
> + virtual bool copyVideoTextureToPlatformTexture(GraphicsContext3D*, Platform3DObject, GC3Denum target, GC3Dint level, GC3Denum internalFormat, GC3Denum format, GC3Denum type, bool premultiplyAlpha, bool flipY) override;
Nit: don't need both "virtual" and "override" (the latter implies the former).
> Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateAVFoundationObjC.mm:2370
> +void MediaPlayerPrivateAVFoundationObjC::destroyOpenGLVideoOutput()
This is never called.
> Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateAVFoundationObjC.mm:2382
> +bool MediaPlayerPrivateAVFoundationObjC::openGLVideoOutputHasAvailableFrame()
Nit: this and updateLastOpenGLImage should take the item time instead of both calling -[AVPlayerItem currentTime]
> Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateAVFoundationObjC.mm:2406
> +void MediaPlayerPrivateAVFoundationObjC::updateLastOpenGLImage()
> +{
> + if (!m_openGLVideoOutput)
> + return;
> +
> + CMTime currentTime = [m_avPlayerItem currentTime];
> + if (![m_openGLVideoOutput hasNewPixelBufferForItemTime:currentTime])
> + return;
> +
> + m_lastOpenGLImage = adoptCF([m_openGLVideoOutput copyPixelBufferForItemTime:currentTime itemTimeForDisplay:nil]);
> +}
Nit: this seems unnecessary - it is never called if m_openGLVideoOutput is null or there isn't a pixel buffer for the current time. Why not just use copyPixelBufferForItemTime:itemTimeForDisplay: directly?
I won't entertain more discussion of HTTP ranges; that's got nothing whatsoever to do with tex[Sub]Image2D speed.
I clocked an optimal path for putting the texels of a video into a texture using a C++ test program quite a while ago (just consult the Chrome ticket that's been open for the last 4 years).
If you are uploading YCbCr (4:2:2) to the GPU from RAM and then converting to RGB on the GPU, this should not take more than 2 ms (even on fairly slow machines). And if you already have a YCbCr texture in VRAM, this will take approximately 10 microseconds.
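For concreteness, the YCbCr-to-RGB step described above is just a per-pixel linear transform. Here is the standard BT.601 limited-range conversion written out in JS as a sketch; in a browser it would live in a fragment shader, and HD content typically uses the BT.709 coefficients instead:

```javascript
// BT.601 limited-range YCbCr -> 8-bit RGB. Illustrative math only, not
// code from any browser; HD video commonly uses BT.709 coefficients.
function ycbcrToRgb(y, cb, cr) {
  const yp = 1.164 * (y - 16);             // expand limited-range luma
  const r = yp + 1.596 * (cr - 128);
  const g = yp - 0.392 * (cb - 128) - 0.813 * (cr - 128);
  const b = yp + 2.017 * (cb - 128);
  const clamp = (v) => Math.max(0, Math.min(255, Math.round(v)));
  return [clamp(r), clamp(g), clamp(b)];
}

ycbcrToRgb(235, 128, 128); // → [255, 255, 255] (reference white)
```

Because this is a uniform per-pixel transform with no data dependencies, it is essentially free on the GPU, which is why the cost quoted above is dominated by the RAM-to-VRAM copy rather than the conversion.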
(In reply to comment #26)
> Btw. my server does support ranges, I just checked, so I don't know why you
> would say it wouldn't.
> curl -I -r 0-100 'http://codeflow.org/issues/slow_video_to_texture/mp4/720p.mp4'
HTTP/1.1 200 OK
Content-Length: 1446854
Expires: Thu, 13 Aug 2015 16:59:46 GMT
Server: ftlpy 0.1
Connection: Keep-Alive
ETag: 28d3d94bd36539de2218bd5beafe6d2c
Cache-Control: private, max-age=0, no-cache, no-transform
Date: Thu, 13 Aug 2015 16:59:46 GMT
Content-Type: video/mp4
If your server supported range requests, it would have returned a '206 Partial Content' response, not a '200 OK'.
(In reply to comment #24)
> Comment on attachment 252893
> Patch
>
> View in context:
> https://bugs.webkit.org/attachment.cgi?id=252893&action=review
>
> > Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateAVFoundationObjC.h:242
> > + virtual bool copyVideoTextureToPlatformTexture(GraphicsContext3D*, Platform3DObject, GC3Denum target, GC3Dint level, GC3Denum internalFormat, GC3Denum format, GC3Denum type, bool premultiplyAlpha, bool flipY) override;
>
> Nit: don't need both "virtual" and "override" (the latter implies the former).
Ok.
> > Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateAVFoundationObjC.mm:2370
> > +void MediaPlayerPrivateAVFoundationObjC::destroyOpenGLVideoOutput()
>
> This is never called.
Well, that's a mistake! :)
> > Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateAVFoundationObjC.mm:2382
> > +bool MediaPlayerPrivateAVFoundationObjC::openGLVideoOutputHasAvailableFrame()
>
> Nit: this and updateLastOpenGLImage should take the item time instead of
> both calling -[AVPlayerItem currentTime]
Actually, see below.
> > Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateAVFoundationObjC.mm:2406
> > +void MediaPlayerPrivateAVFoundationObjC::updateLastOpenGLImage()
> > +{
> > + if (!m_openGLVideoOutput)
> > + return;
> > +
> > + CMTime currentTime = [m_avPlayerItem currentTime];
> > + if (![m_openGLVideoOutput hasNewPixelBufferForItemTime:currentTime])
> > + return;
> > +
> > + m_lastOpenGLImage = adoptCF([m_openGLVideoOutput copyPixelBufferForItemTime:currentTime itemTimeForDisplay:nil]);
> > +}
>
> Nit: this seems unnecessary - it is never called if m_openGLVideoOutput is
> null or there isn't a pixel buffer for the current time. Why not just use
> copyPixelBufferForItemTime:itemTimeForDisplay: directly?
Maybe we shouldn't have an openGLVideoOutputHasAvailableFrame(), and should instead just call updateLastOpenGLImage() and check whether m_lastOpenGLImage is non-NULL.
(In reply to comment #27)
> (In reply to comment #26)
> > Btw. my server does support ranges, I just checked, so I don't know why you
> > would say it wouldn't.
>
> > curl -I -r 0-100 'http://codeflow.org/issues/slow_video_to_texture/mp4/720p.mp4'
> HTTP/1.1 200 OK
> Content-Length: 1446854
> Expires: Thu, 13 Aug 2015 16:59:46 GMT
> Server: ftlpy 0.1
> Connection: Keep-Alive
> ETag: 28d3d94bd36539de2218bd5beafe6d2c
> Cache-Control: private, max-age=0, no-cache, no-transform
> Date: Thu, 13 Aug 2015 16:59:46 GMT
> Content-Type: video/mp4
>
> If your server supported range requests, it would have returned a '206
> Partial Content' response, not a '200 OK'.
GET /issues/slow_video_to_texture/mp4/720p.mp4 HTTP/1.1
Host: codeflow.org
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Accept-Encoding: identity;q=1, *;q=0
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.155 Safari/537.36
Accept: */*
Referer: http://codeflow.org/issues/slow_video_to_texture/mp4/720p.mp4
Accept-Language: en-US,en;q=0.8,de;q=0.6
Cookie: __utma=240947345.1694194387.1439291172.1439291172.1439291172.1; __utmc=240947345; __utmz=240947345.1439291172.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)
Range: bytes=0-
HTTP/1.1 206 Partial Content
Content-Length: 1446854
Accept-Ranges: bytes
Expires: Thu, 13 Aug 2015 17:04:42 GMT
Content-Range: bytes 0-1446853/1446854
Server: ftlpy 0.1
Connection: Keep-Alive
ETag: 28d3d94bd36539de2218bd5beafe6d2c
Cache-Control: private, max-age=0, no-cache, no-transform
Date: Thu, 13 Aug 2015 17:04:42 GMT
Content-Type: video/mp4
It just doesn't do it on HEAD requests, because I've seen no UA use HEAD for video.
(In reply to comment #25)
> I won't entertain more discussions of http range or not, that's got nothing
> whatsoever to do with tex[Sub]Image2D speed.
Well, except that once this bug is fixed, you'll still need Byte Range support enabled on your server to see it work.
> I have clocked an optimal path for putting the texels of a video into a
> texture using a C++ test program quite a while ago (just consult the chrome
> ticket that's been open for the last 4 years).
>
> If you are uploading YCbCr (4:2:2) to the GPU from ram and then convert to
> RGB on GPU this should not take more than 2ms (even on fairly slow
> machines). And if you already have a YCbCr texture in vram, this will take
> approximately 10 microseconds.
That assumes that the image starts off on the CPU. Modern hardware decoders will write directly to the GPU, so your "optimal" path would necessitate a GPU readback, even when the decoded image was already in RGB.
(In reply to comment #30)
> That assumes that the image starts off on the CPU. Modern hardware decoders
> will write directly to the GPU, so your "optimal" path would necessitate a
> GPU readback, even when the decoded image was already in RGB.
It assumes that you either have it on the CPU already, or it recognizes that you don't, in which case it's even faster because it's already on the GPU and you can do a GPU -> GPU transfer. In either case, whatever it assumes, implementations of this are bad, and have been, and still are.
(In reply to comment #31)
> (In reply to comment #30)
> > That assumes that the image starts off on the CPU. Modern hardware decoders
> > will write directly to the GPU, so your "optimal" path would necessitate a
> > GPU readback, even when the decoded image was already in RGB.
>
> It assumes that you either have it on CPU already, or recognizes that you
> don't, in which case it's even faster because it's already on the GPU and
> you can do a GPU -> GPU transfer.
You'll note, in the patch attached to this bug, that's exactly what we'll do.
Here are the results from Florian's test case with the current WIP patch:
upload resolution frame/s upload time (ms)
texImage2D 240p 57.64 0.89
texImage2D 480p 55.63 1.28
texImage2D 720p 45.82 2.18
texImage2D 1080p 53.94 1.76
texSubImage2D 240p 56.73 1.09
texSubImage2D 480p 36.82 1.06
texSubImage2D 720p 55.74 1.23
texSubImage2D 1080p 54.55 2.06
texImage2D 240p 55.81 1.39
texImage2D 480p 56.29 0.94
texImage2D 720p 55.61 1.14
texImage2D 1080p 54.81 1.83
texSubImage2D 240p 56.26 1.25
texSubImage2D 480p 56.28 0.95
texSubImage2D 720p 55.29 1.39
texSubImage2D 1080p 54.03 2.10
There may be a few more ms of efficiency to squeeze out, but it's generally very much improved.
That GPU to GPU patch sounds good!
Is there a chance that it might be included in the final iOS 9 release?
(I know - no comments on future releases, but maybe a 'maybe'? :-))
(In reply to comment #34)
> Here are the results from Florian's test case with the current WIP patch:
>
> upload resolution frame/s upload time (ms)
> texImage2D 240p 57.64 0.89
> texImage2D 480p 55.63 1.28
> texImage2D 720p 45.82 2.18
> texImage2D 1080p 53.94 1.76
> texSubImage2D 240p 56.73 1.09
> texSubImage2D 480p 36.82 1.06
> texSubImage2D 720p 55.74 1.23
> texSubImage2D 1080p 54.55 2.06
> texImage2D 240p 55.81 1.39
> texImage2D 480p 56.29 0.94
> texImage2D 720p 55.61 1.14
> texImage2D 1080p 54.81 1.83
> texSubImage2D 240p 56.26 1.25
> texSubImage2D 480p 56.28 0.95
> texSubImage2D 720p 55.29 1.39
> texSubImage2D 1080p 54.03 2.10
>
> There may be a few more ms of efficiency to squeeze out, but it's generally
> very much improved.
Why is the FPS not 60 if the upload time is so small? You're probably still taking upwards of 16ms for a video frame upload. I daresay, there's more than a "few" improvements you can squeeze out there, like approximately 1200%.
(In reply to comment #36)
> Why is the FPS not 60 if the upload time is so small? You're probably still
> taking upwards of 16ms for a video frame upload. I daresay, there's more
> than a "few" improvements you can squeeze out there, like approximately
> 1200%.
Commenting out the call to texImage2D and re-running the test leads to frame rates which hover around 57 fps; so there is some inefficiency outside of video texture upload, and out of scope of this bug, to figure out.
(In reply to comment #37)
> (In reply to comment #36)
> > Why is the FPS not 60 if the upload time is so small? You're probably still
> > taking upwards of 16ms for a video frame upload. I daresay, there's more
> > than a "few" improvements you can squeeze out there, like approximately
> > 1200%.
>
> Commenting out the call to texImage2D and re-running the test leads to frame
> rates which hover around 57fps; so there is some inefficiency outside of
> video texture upload, and out of scope of this bug, to figure out.
There's a stripped down simple version of the test here: http://codeflow.org/issues/slow_video_to_texture/simple.html that you can use to compare against the more complex one.
If you modify your Statistics object to return a weighted moving average (that gives more weight to recent entries), the FPS averages for each test rapidly converge on ~59.7 FPS. This implies that the slowness is almost entirely in startup of each test. And indeed, we have some code which, if no GPU frames are available, blocks until a CPU frame can be generated. This was originally added for (canvas and WebGL) correctness reasons; and we are indeed hitting that path at the beginning of each of your tests.
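A weighted moving average of the kind suggested here is only a few lines. This sketch is illustrative and not the test's actual Statistics object; it weights recent samples exponentially more, so startup outliers decay away quickly:

```javascript
// Exponentially-weighted moving average: alpha controls how strongly
// recent samples dominate. Slow startup frames stop skewing the figure.
function makeEwma(alpha = 0.1) {
  let value = null;
  return (sample) => {
    value = value === null ? sample : alpha * sample + (1 - alpha) * value;
    return value;
  };
}

const fps = makeEwma(0.5);
fps(20); // startup hiccup: average starts at 20
fps(60); // steady-state frame pulls it to 40
fps(60); // → 50, rapidly converging toward 60
```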
(In reply to comment #39)
> If you modify your Statistics object to return a weighted moving average
> (that gives more weight to recent entries), the FPS averages for each test
> rapidly converge on ~59.7 FPS. This implies that the slowness is almost
> entirely in startup of each test. And indeed, we have some code which, if no
> GPU frames are available, blocks until a CPU frame can be generated. This
> was originally added for (canvas and WebGL) correctness reasons; and we are
> indeed hitting that path at the beginning of each of your tests.
You can modify the simple test to use a rolling average, or start measuring only N frames in, or something. Anyway, if you want to know how much better you've gotten beyond satisfying 60 fps and a low upload time (which is somewhat unreliable because of asynchronicity), you should repeat the upload multiple times until the FPS drops.
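The "repeat the upload until you drop frames" idea can be sketched like this; `findSaturation` is a hypothetical name, and `uploadFn` would wrap the real texSubImage2D call:

```javascript
// Double the number of uploads per frame until the frame budget is
// blown; the largest count that fits gives a throughput-style figure
// that is much less noisy than timing a single (pipelined) upload.
function findSaturation(uploadFn, budgetMs = 16.7) {
  let n = 1;
  for (;;) {
    const t0 = performance.now();
    for (let i = 0; i < n; i++) uploadFn();
    if (performance.now() - t0 > budgetMs) return n;
    n *= 2; // still within budget: try twice as many uploads
  }
}
```

This sidesteps the asynchronicity problem mentioned above: instead of trusting one small timing, you measure how many uploads fit in a 60 fps frame.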
Created attachment 259627
Archive of layout-test-results from ews101 for mac-mavericks
The attached test failures were seen while running run-webkit-tests on the mac-ews.
Bot: ews101 Port: mac-mavericks Platform: Mac OS X 10.9.5
Created attachment 259629
Archive of layout-test-results from ews104 for mac-mavericks-wk2
The attached test failures were seen while running run-webkit-tests on the mac-wk2-ews.
Bot: ews104 Port: mac-mavericks-wk2 Platform: Mac OS X 10.9.5
Created attachment 261950
Archive of layout-test-results from ews100 for mac-mavericks
The attached test failures were seen while running run-webkit-tests on the mac-ews.
Bot: ews100 Port: mac-mavericks Platform: Mac OS X 10.9.5
Created attachment 261952
Archive of layout-test-results from ews104 for mac-mavericks-wk2
The attached test failures were seen while running run-webkit-tests on the mac-wk2-ews.
Bot: ews104 Port: mac-mavericks-wk2 Platform: Mac OS X 10.9.5
Created attachment 261963
Patch
Finally figured out the failing test: I wasn't handling the flipY = true case. This should make the EWS bots happy.
Comment on attachment 262015
Patch
View in context: https://bugs.webkit.org/attachment.cgi?id=262015&action=review
> Source/WebCore/ChangeLog:9
> + when lazily-created, will emit CVPixelBuffers which are guaranteed to be compatable with
Typo: compatible
> Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateAVFoundationObjC.mm:2430
> +}
> +
> +
> +#if !LOG_DISABLED
Extra blank line.
> Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateAVFoundationObjC.mm:2566
> + size_t height = CVPixelBufferGetHeight(m_lastOpenGLImage.get());
> +
> +
> +#if PLATFORM(IOS)
Another extra line.
> Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateAVFoundationObjC.mm:2597
> + GC3Denum readFramebufferTarget = GraphicsContext3D::READ_FRAMEBUFFER;
> + GC3Denum readFramebufferBinding = GraphicsContext3D::READ_FRAMEBUFFER_BINDING;
I'm not sure it makes it any clearer to have these variables around that point to constants. You never change their values. It might be fine to just explicitly include them as parameters when you use them.
> texSubImage2D 480p 36.82 1.06
> texSubImage2D 720p 55.74 1.23
Why is 480p slower than 720p? Is it because the test doesn't converge in time?
It would be nice if this patch could be automatically tested. However, given the state of our testing, I doubt we could get reliable results.
(In reply to comment #55)
> > texSubImage2D 480p 36.82 1.06
> > texSubImage2D 720p 55.74 1.23
>
> Why is 480p slower than 720p? Is it because the test doesn't converge in
> time?
>
> It would be nice if this patch could be automatically tested. However, given
> the state of our testing, I doubt we could get reliable results.
See the comments that follow. The frame rate averages are thrown off greatly by the first few frames, which are, at that point, still using the software painting path. We need time to spin up the AVPlayerItemOutput, which is capable of generating OpenGL-compatible buffers. If you use a weighted moving average, you see that we hover right at 60 fps during steady-state.
Hello,
Does anyone know if the patch landed in Safari on iOS 9.3.2? (Is there a way for me to find out, other than observation?)
Thank you!
-- Freddy Snijder
2015-05-11 10:29 PDT, Jer Noble
2015-05-11 10:47 PDT, Jer Noble
2015-05-11 11:41 PDT, Build Bot
2015-05-11 11:48 PDT, Build Bot
2015-05-11 13:11 PDT, Jer Noble
2015-05-11 14:04 PDT, Build Bot
2015-05-11 14:10 PDT, Build Bot
2015-05-11 14:53 PDT, Jer Noble
2015-05-11 15:46 PDT, Build Bot
2015-05-11 15:52 PDT, Build Bot
2015-08-20 16:58 PDT, Jer Noble
2015-08-21 09:53 PDT, Jer Noble
2015-08-21 10:30 PDT, Build Bot
2015-08-21 10:31 PDT, Build Bot
2015-09-25 15:34 PDT, Jer Noble
2015-09-25 15:56 PDT, Build Bot
2015-09-25 16:16 PDT, Build Bot
2015-09-25 23:23 PDT, Jer Noble
2015-09-28 13:11 PDT, Jer Noble
2015-11-18 16:40 PST, Jer Noble