NEW 270622
[GStreamer][MSE] playback doesn't start if less than 1s of video is available.
https://bugs.webkit.org/show_bug.cgi?id=270622
Jean-Yves Avenard [:jya]
Reported 2024-03-07 01:24:26 PST
In bug 270614 I added a test which loads 1s of video, calls play() and waits for playback to stall.

```
loader = new MediaSourceLoader('content/test-fragmented-video-manifest.json');
await loaderPromise(loader);
video.disableRemotePlayback = true;
video.muted = true;
run('source = new ManagedMediaSource()');
run('video.src = URL.createObjectURL(source)');
await waitFor(source, 'sourceopen');
run('sourceBuffer = source.addSourceBuffer(loader.type())');
run('sourceBuffer.appendBuffer(loader.initSegment())');
await waitFor(sourceBuffer, 'update');
run('sourceBuffer.appendBuffer(loader.mediaSegment(0))');
await waitFor(sourceBuffer, 'update');
run('sourceBuffer.appendBuffer(loader.mediaSegment(2))');
await waitFor(sourceBuffer, 'update');
run('video.play()');
await waitFor(video, 'playing');
await Promise.all([
    testExpectedEventuallySilent('video.currentTime', 1, '>='),
    waitFor(video, 'waiting')
]);
testExpected('video.currentTime', 1, '>=');
currentTimeWhenStalling = video.currentTime;
// Issue pause() command while playback has stalled.
run('video.pause()');
// Fill gap, playback shouldn't continue, even briefly.
run('sourceBuffer.appendBuffer(loader.mediaSegment(1))');
await sleepFor(1000);
testExpected('video.currentTime == currentTimeWhenStalling', true);
endTest();
```

Runs with GStreamer-based players show that playback never reaches the currentTime = 1s mark: we never stall and the "waiting" event is never fired.

https://ews-build.s3-us-west-2.amazonaws.com/GTK-WK2-Tests-EWS/2bf004ac-42149-stress-mode/media/media-source/media-managedmse-noresumeafterpause-pretty-diff.html
Philippe Normand
Comment 1 2024-03-07 01:26:00 PST
For posterity, here is the diff (those EWS URLs expire):

```
--- /home/ews/worker/GTK-WK2-Tests-EWS/build/layout-test-results/media/media-source/media-managedmse-noresumeafterpause-expected.txt
+++ /home/ews/worker/GTK-WK2-Tests-EWS/build/layout-test-results/media/media-source/media-managedmse-noresumeafterpause-actual.txt
@@ -1,3 +1,5 @@
+FAIL: Timed out waiting for notifyDone to be called
+
 RUN(source = new ManagedMediaSource())
 RUN(video.src = URL.createObjectURL(source))
@@ -11,10 +13,4 @@
 EVENT(update)
 RUN(video.play())
 EVENT(playing)
-EVENT(waiting)
-EXPECTED (video.currentTime >= '1') OK
-RUN(video.pause())
-RUN(sourceBuffer.appendBuffer(loader.mediaSegment(1)))
-EXPECTED (video.currentTime == currentTimeWhenStalling == 'true') OK
-END OF TEST
```
Alicia Boya García
Comment 2 2025-02-27 03:12:57 PST
I see there is no source.endOfStream() in that test, so I wonder if that's where it's getting stuck.

The ffmpeg decoding API used by the avdec_* GStreamer elements has the problem that, lacking endOfStream, a small number of frames may never come out of the decoder. Simplified, the loop works like this (pseudocode):

```
def decode_frame(encoded_frame):
    avcodec_send_packet(encoded_frame)
    for decoded_frame in avcodec_receive_frame():
        push_decoded_frame(decoded_frame)
```

In order to keep decode_frame() and push_decoded_frame() running in the same thread, whenever you call avcodec_send_packet() with a frame to decode, the frame is given to a worker thread to decode. avcodec_send_packet() only blocks if all the worker threads are busy. avcodec_receive_frame() is a polling operation and will give no frames in the first few calls. The end of stream is handled by using NULL as encoded_frame, which causes avcodec_send_packet() to wait for all the decoding to be done.

This API works fine in most playback scenarios, but it is rather subpar for the case of MSE and it often causes trouble in tests that don't do endOfStream().

Note that this is a consequence of how the avdec_* elements work. In principle, it should be possible to modify them or create a different decoder element where the output pad has its own thread and waits blocking for frames to be decoded (assuming ffmpeg has an API to do that, which I hope it does, but I haven't searched very hard for it). This would be a less than trivial change, however.
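For reference, the drain pattern that the FFmpeg decoding API expects at end of stream looks roughly like this (a minimal C sketch of plain libavcodec usage, not the actual gst-libav code; error handling omitted and push_decoded_frame() is a hypothetical downstream push):

```c
#include <libavcodec/avcodec.h>

/* Minimal sketch: without the NULL packet below, frames still queued in the
 * decoder's worker threads are never delivered. */
static void drain_decoder(AVCodecContext *ctx, AVFrame *frame)
{
    /* A NULL packet puts the decoder into draining mode. */
    avcodec_send_packet(ctx, NULL);

    /* Keep receiving until the decoder has flushed out every pending frame
     * (it then returns AVERROR_EOF instead of 0). */
    while (avcodec_receive_frame(ctx, frame) == 0) {
        /* push_decoded_frame(frame);  -- hypothetical */
        av_frame_unref(frame);
    }
}
```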
Enrique Ocaña
Comment 3 2025-02-27 12:36:20 PST
I can confirm that the issue Alicia is describing is what's actually happening. I've placed debug probes in the avdec_h264 video decoder and confirmed that buffers with PTS in [0, 1] seconds enter the decoder, but only buffers with PTS in [0, 0.70] come out of it.

Still, I've been trying to work around this issue by:

- Calling MediaSource.endOfStream() before checking if currentTime >= 1 (but it doesn't help in this case, since the code appends [0, 1] and [2, 3], with a gap on purpose).
- Appending twice the amount of data on each step.
- Relaxing the check for currentTime to a position before the maximum time that avdec_h264 is able to output on my system (because apparently this depends on the number of available cores/threads).

...with no success.

The next thing I'm going to check, when I can, is how the "waiting" event would be produced in an ideal case.
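For reference, this kind of in/out PTS measurement can be reproduced with GStreamer buffer probes roughly like the following (an illustrative C sketch, not the exact probes used; attach_probe() is a hypothetical helper expected to be called for both the "sink" and "src" pads of avdec_h264):

```c
#include <gst/gst.h>

/* Log the PTS of every buffer crossing the pad the probe is attached to. */
static GstPadProbeReturn
log_pts_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buffer = GST_PAD_PROBE_INFO_BUFFER (info);
  GST_INFO ("%s:%s PTS %" GST_TIME_FORMAT,
      GST_DEBUG_PAD_NAME (pad), GST_TIME_ARGS (GST_BUFFER_PTS (buffer)));
  return GST_PAD_PROBE_OK;
}

/* Hypothetical helper: attach the probe to one static pad of the decoder. */
static void
attach_probe (GstElement *avdec, const gchar *pad_name)
{
  GstPad *pad = gst_element_get_static_pad (avdec, pad_name);
  gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, log_pts_probe, NULL, NULL);
  gst_object_unref (pad);
}
```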
Alicia Boya García
Comment 4 2025-02-28 07:17:57 PST
> it doesn't help in this case, since the code appends [0, 1] and [2, 3] (with a gap on purpose) This is certainly a tricky edge case.
Alicia Boya García
Comment 5 2025-03-07 07:50:47 PST
I found out Philippe already introduced a change to set max-threads=2 on the video avdec_* elements to help with this. Given that playback is stuck at 0.7s, I suspect something else is introducing a delay, but I don't have more time to look at it today.
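For context, that limit is just an element property on the gst-libav decoders, along these lines (a sketch; where exactly WebKit applies it is not shown here):

```c
#include <gst/gst.h>

/* Sketch: cap avdec_h264's worker threads. Fewer frame-threading workers
 * mean fewer decoded frames held back inside the decoder. */
static void
cap_decoder_threads (GstElement *avdec)
{
  g_object_set (avdec, "max-threads", 2, NULL);
}
```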
Enrique Ocaña
Comment 6 2025-05-12 12:58:22 PDT
I've been trying to move this bug forward a bit, with no success so far.

First, the "waiting" event is produced when currentTime reaches the end of the buffered range currently being played (so there's nothing more to play). Since the video decoder retains too many buffers, currentTime doesn't progress beyond 0.70s and never reaches the end of the range (1s), so the "waiting" event is never emitted. There's nothing that can be done to improve this other than convincing the video decoder (avdec) to emit all the frames.

A possible way to do that would be to issue a Drain query. I've written experimental code to perform that query on the peer pad of the WebKitMediaSrc video pad. The query goes downstream, down to the video sink (which flushes its buffer, because that's what a drain query means for a sink, but that's another story). However, there's no trace of the video decoder emitting any extra frames because of the Drain query, so this approach is unsuccessful for our goal.

I'm out of ideas about how to improve the situation of this bug and will switch tasks for now.
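For reference, the experiment described above boils down to something like this (a sketch; webkitmediasrc_video_pad is a hypothetical variable standing for the WebKitMediaSrc video source pad):

```c
#include <gst/gst.h>

/* Sketch: a DRAIN query is serialized with the data flow, so downstream
 * elements are expected to answer it only once everything queued before
 * it has been processed. In this experiment avdec did not flush any
 * extra frames in response to it. */
static gboolean
drain_downstream (GstPad *webkitmediasrc_video_pad)
{
  GstQuery *query = gst_query_new_drain ();
  gboolean ret = gst_pad_peer_query (webkitmediasrc_video_pad, query);
  gst_query_unref (query);
  return ret;
}
```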
Enrique Ocaña
Comment 7 2025-05-16 06:08:52 PDT
Alicia gave me a hint about the still-frame event (thanks!), which achieves the same thing I wanted from the drain query. I've implemented a prototype patch using it, and with it avdec emits all the buffers and the test progresses beyond the wait state.

It now fails because the video progresses a bit beyond 1s. This is because the support for mp4 edit lists added in https://bugs.webkit.org/show_bug.cgi?id=231019 was reverted in https://bugs.webkit.org/show_bug.cgi?id=244428. I'm going to mark that bug as a dependency of this one.
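For reference, pushing a still-frame event downstream looks roughly like this (a sketch using the gst-plugins-base video helper; the pad variable is hypothetical and this is not the actual prototype patch):

```c
#include <gst/gst.h>
#include <gst/video/video.h>

/* Sketch: a still-frame event (in_still = TRUE) travels downstream and,
 * in the prototype described above, is what gets avdec to emit the
 * frames it is still holding, without the teardown an EOS would imply. */
static gboolean
push_still_frame (GstPad *video_src_pad)
{
  GstEvent *event = gst_video_event_new_still_frame (TRUE);
  return gst_pad_push_event (video_src_pad, event);
}
```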
Enrique Ocaña
Comment 8 2025-05-19 11:13:52 PDT
The fix for 1 sec playback is being tracked as bug 293239.