With MockMSE running in the GPU process, the test fails consistently:

```
RUN(sourceBuffer.appendBuffer(samples))
EVENT(updateend)
RUN(quality = video.getVideoPlaybackQuality())
-EXPECTED (quality.totalVideoFrames == '8') OK
-EXPECTED (quality.corruptedVideoFrames == '1') OK
-EXPECTED (quality.droppedVideoFrames == '2') OK
-EXPECTED (quality.totalFrameDelay == '3') OK
+EXPECTED (quality.totalVideoFrames == '8'), OBSERVED '1' FAIL
+EXPECTED (quality.corruptedVideoFrames == '1'), OBSERVED '0' FAIL
+EXPECTED (quality.droppedVideoFrames == '2'), OBSERVED '1' FAIL
+EXPECTED (quality.totalFrameDelay == '3'), OBSERVED '0' FAIL
END OF TEST
```

The test waits for the `updateend` event to fire on the SourceBuffer before calling `getVideoPlaybackQuality()`, but that only works when MockMSE runs in the content process, due to an implementation detail of MockMediaPlayerMediaSource: it updates the current time and playback quality immediately. In practice this expectation is nonsensical anyway: the VideoPlaybackQuality object can only be expected to be up to date once the video has actually been played through, since frames won't be dropped while the video is paused.
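A more robust version of the check would play the video to completion before reading the metrics. A sketch in the style of WebKit's video-test harness (assuming the usual `run`/`waitFor`/`testExpected`/`endTest` helpers; the expected values are illustrative, taken from the test output above):

```
// Append the samples, then actually play the video before
// inspecting playback quality.
waitFor(sourceBuffer, 'updateend').then(() => {
    run('video.play()');
    return waitFor(video, 'ended');
}).then(() => {
    // Only once playback has finished can the dropped/corrupted
    // frame counters be expected to be up to date.
    run('quality = video.getVideoPlaybackQuality()');
    testExpected('quality.totalVideoFrames', 8);
    testExpected('quality.corruptedVideoFrames', 1);
    testExpected('quality.droppedVideoFrames', 2);
    endTest();
});
```

This moves the assertion point from "buffer appended" to "playback finished", which is the earliest moment the quality counters are meaningful regardless of which process the media player runs in.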
<rdar://problem/107182984>
Pull request: https://github.com/WebKit/WebKit/pull/11915
Committed 262071@main (ad2552fcc7cd): <https://commits.webkit.org/262071@main> Reviewed commits have been landed. Closing PR #11915 and removing active labels.