WebKit Bugzilla
Bug 159836: [MediaStream, Mac] Render media stream audio buffers
Status: RESOLVED FIXED
https://bugs.webkit.org/show_bug.cgi?id=159836
George Ruan
Reported
2016-07-15 14:40:00 PDT
Implement AVSampleBufferAudioRenderer and AVSampleBufferSynchronizer in MediaPlayerPrivateMediaStream.
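For context, the rough shape of the AVFoundation pipeline this bug adopts: an AVSampleBufferAudioRenderer attached to an AVSampleBufferRenderSynchronizer, fed with CMSampleBuffers. This is a simplified sketch, not the patch itself; the enqueue callback and the nextAudioSampleBuffer() helper are illustrative assumptions.

#import <AVFoundation/AVFoundation.h>

// Create the audio renderer and attach it to a synchronizer that drives the timeline.
AVSampleBufferAudioRenderer *audioRenderer = [[AVSampleBufferAudioRenderer alloc] init];
AVSampleBufferRenderSynchronizer *synchronizer = [[AVSampleBufferRenderSynchronizer alloc] init];
[synchronizer addRenderer:audioRenderer];

// Feed sample buffers as the renderer asks for them.
[audioRenderer requestMediaDataWhenReadyOnQueue:dispatch_get_main_queue() usingBlock:^{
    while (audioRenderer.isReadyForMoreMediaData) {
        CMSampleBufferRef sample = nextAudioSampleBuffer(); // hypothetical source of stream buffers
        if (!sample)
            break;
        [audioRenderer enqueueSampleBuffer:sample];
    }
}];

// Start playback by setting the synchronizer's rate at the stream's current time.
[synchronizer setRate:1.0 time:kCMTimeZero];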
Attachments

WIP (32.44 KB, patch), 2016-08-12 18:36 PDT, George Ruan, no flags
Proposed patch. (96.20 KB, patch), 2017-01-11 11:33 PST, Eric Carlson, jer.noble: review+
Patch for landing. (96.29 KB, patch), 2017-01-11 17:58 PST, Eric Carlson, no flags
Updated patch for landing. (95.95 KB, patch), 2017-01-11 19:50 PST, Eric Carlson, no flags
Radar WebKit Bug Importer
Comment 1
2016-07-15 14:40:45 PDT
<rdar://problem/27380390>
George Ruan
Comment 2
2016-08-12 18:36:21 PDT
Created attachment 285991 [details]
WIP
Eric Carlson
Comment 3
2017-01-11 11:33:34 PST
Created attachment 298603 [details]
Proposed patch.
Jer Noble
Comment 4
2017-01-11 13:11:06 PST
Comment on attachment 298603 [details]
Proposed patch.

View in context: https://bugs.webkit.org/attachment.cgi?id=298603&action=review
r=me with nits and comments.
> Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:156
> +- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
> +{
> +    UNUSED_PARAM(context);
> +    UNUSED_PARAM(keyPath);
> +    ASSERT(_parent);
> +
> +    RetainPtr<WebAVSampleBufferStatusChangeListener> protectedSelf = self;
> +    if ([object isKindOfClass:getAVSampleBufferDisplayLayerClass()]) {
The way I like to do these KVO observer methods is like this:

    static void* DisplayLayerContext = &DisplayLayerContext;
    static void* AudioRendererContext = &AudioRendererContext;

    ...

    if (context == DisplayLayerContext) { ... } else if (context == AudioRendererContext) { ... }

Your way is totally fine too.
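A fuller, self-contained sketch of that context-pointer idiom, for reference. This is not the patch's listener class; the observer class name and observation setup below are illustrative.

#import <AVFoundation/AVFoundation.h>

static void* DisplayLayerContext = &DisplayLayerContext;
static void* AudioRendererContext = &AudioRendererContext;

@interface SampleBufferStatusObserver : NSObject
@end

@implementation SampleBufferStatusObserver

- (void)startObservingLayer:(AVSampleBufferDisplayLayer *)layer renderer:(AVSampleBufferAudioRenderer *)renderer
{
    // The context pointer, not the key path or the object's class, identifies the source.
    [layer addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew context:DisplayLayerContext];
    [renderer addObserver:self forKeyPath:@"status" options:NSKeyValueObservingOptionNew context:AudioRendererContext];
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
{
    if (context == DisplayLayerContext) {
        // Display layer status changed.
    } else if (context == AudioRendererContext) {
        // Audio renderer status changed.
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}

@end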
> Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:269
> +    MediaTime now = streamTime();
> +    while (!queue.isEmpty()) {
> +        if (queue.first()->decodeTime() > now)
> +            break;
> +        queue.removeFirst();
> +    };
For SourceBuffer, we use a variant of SampleMap here, to ensure the samples are dequeued in decode order. Specifically, we use a DecodeOrderSampleMap::MapType, where MapType is a std::map from a std::pair<> of DecodeTime, PresentationTime -> MediaTime. If you can guarantee decode order insertion into this queue, you don't need it, but if there's a chance that samples will come in out-of-order, you'll eventually need something like DecodeOrderSampleMap (or to sort the queue).

With a std::map, this would turn into:

    queue.erase(queue.start(), queue.upper_bound({now, MediaTime::zeroTime()});

(Erase everything from the beginning up until the first sample with a decode time greater than now.)
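As a standalone illustration of the std::map pattern Jer sketches, with plain doubles standing in for MediaTime (an assumption for brevity), and using std::map's begin() iterator:

#include <map>
#include <utility>

struct Sample { /* decoded media payload */ };

// Keyed by {decodeTime, presentationTime}, so iteration is in decode order.
using DecodeOrderedQueue = std::map<std::pair<double, double>, Sample>;

void removeOutdatedSamples(DecodeOrderedQueue& queue, double now)
{
    // Erase from the front up to the first entry whose key compares greater
    // than {now, 0}, i.e. drop samples whose decode time has already passed.
    queue.erase(queue.begin(), queue.upper_bound({now, 0.0}));
}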
> Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:319
> +    static const double audioRendererLatency = 0.2;
> +    auto audioTrack = m_audioTrackMap.get(track.id());
> +    MediaTime timelineOffset = audioTrack->timelineOffset();
> +    if (timelineOffset == MediaTime::invalidTime()) {
> +        timelineOffset = calculateTimelineOffset(sample, audioRendererLatency);
> +        audioTrack->setTimelineOffset(timelineOffset);
This...
> Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:351
> +    auto videoTrack = m_videoTrackMap.get(track.id());
> +    MediaTime timelineOffset = videoTrack->timelineOffset();
> +    if (timelineOffset == MediaTime::invalidTime()) {
> +        timelineOffset = calculateTimelineOffset(sample, 0.01);
And this. First off, we should probably document the 0.01s value (the same way audioRendererLatency is documented by making it a static const). But shouldn't the offset values be the same between the audio and video tracks? Like, should this be stored on a per-MediaPlayer basis rather than a per-track basis, so that the audio and video don't get out of sync?
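A hedged sketch of the per-player alternative being suggested; the member and helper names below are hypothetical, not from the patch.

// One named, documented latency constant, and one offset shared by all of the
// player's tracks so audio and video stay on the same timeline.
static const double rendererLatency = 0.01;

MediaTime MediaPlayerPrivateMediaStreamAVFObjC::timelineOffsetForSample(MediaSample& sample)
{
    if (m_timelineOffset == MediaTime::invalidTime())
        m_timelineOffset = calculateTimelineOffset(sample, rendererLatency);
    return m_timelineOffset;
}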
> Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:455
> +    m_pendingAudioSampleQueue.clear();
> +    for (auto& renderer : m_audioRenderers.values()) {
> +        [m_statusChangeListener stopObservingRenderer:renderer.get()];
> +        [renderer flush];
> +        [renderer stopRequestingMediaData];
> +    }
> +    m_audioRenderers.clear();
We don't remove the renderers from the synchronizer here, but we do above in destroyAudioRenderer(). Could we refactor this a bit to make a destroyAudioRenderer(AVSampleBufferAudioRenderer*), and call that from both destroyAudioRenderers() and destroyAudioRenderer(AtomicString) ?
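One possible shape of that refactor, as a sketch only; the synchronizer-removal call and exact signatures are assumptions, not the landed code.

void MediaPlayerPrivateMediaStreamAVFObjC::destroyAudioRenderer(AVSampleBufferAudioRenderer *renderer)
{
    [m_statusChangeListener stopObservingRenderer:renderer];
    [renderer flush];
    [renderer stopRequestingMediaData];
    // Also detach the renderer from the synchronizer, as the per-track path does today.
    [m_synchronizer removeRenderer:renderer atTime:kCMTimeInvalid completionHandler:nil];
}

void MediaPlayerPrivateMediaStreamAVFObjC::destroyAudioRenderers()
{
    m_pendingAudioSampleQueue.clear();
    for (auto& renderer : m_audioRenderers.values())
        destroyAudioRenderer(renderer.get());
    m_audioRenderers.clear();
}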
> Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:485
> +    if (status.integerValue == AVQueuedSampleBufferRenderingStatusRendering)
> +        m_audioTrackMap.get(trackID)->setTimelineOffset(MediaTime::invalidTime());
I don't know that this is necessary. You'll probably get the KVO that triggers this after you've already started pushing in data; it may cause discontinuities.
> Source/WebCore/platform/graphics/avfoundation/objc/MediaSampleAVFObjC.mm:44
> +MediaTime MediaSampleAVFObjC::outputPresentationTime() const
> +{
> +    return toMediaTime(CMSampleBufferGetOutputPresentationTimeStamp(m_sample.get()));
> +}
> +
I don't think Output Presentation Timestamps are necessary here. I doubt we'll ever be dealing with samples that play backwards, or are trimmed.
Eric Carlson
Comment 5
2017-01-11 17:50:11 PST
(In reply to comment #4)
> Comment on attachment 298603 [details]
> Proposed patch.
>
> View in context: https://bugs.webkit.org/attachment.cgi?id=298603&action=review
>
> r=me with nits and comments.
>
> > Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:156
> > +- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context
> > +{
> > +    UNUSED_PARAM(context);
> > +    UNUSED_PARAM(keyPath);
> > +    ASSERT(_parent);
> > +
> > +    RetainPtr<WebAVSampleBufferStatusChangeListener> protectedSelf = self;
> > +    if ([object isKindOfClass:getAVSampleBufferDisplayLayerClass()]) {
>
> The way I like to do these KVO observer methods is like this:
>
>     static void* DisplayLayerContext = &DisplayLayerContext;
>     static void* AudioRendererContext = &AudioRendererContext;
>
>     ...
>
>     if (context == DisplayLayerContext) { ... } else if (context == AudioRendererContext) { ... }
>
> Your way is totally fine too.
I copy/pasted this from WebAVSampleBufferErrorListener in SourceBufferPrivateAVFObjC :-)
> > Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:269
> > +    MediaTime now = streamTime();
> > +    while (!queue.isEmpty()) {
> > +        if (queue.first()->decodeTime() > now)
> > +            break;
> > +        queue.removeFirst();
> > +    };
>
> For SourceBuffer, we use a variant of SampleMap here, to ensure the samples are dequeued in decode order. Specifically, we use a DecodeOrderSampleMap::MapType, where MapType is a std::map from a std::pair<> of DecodeTime, PresentationTime -> MediaTime. If you can guarantee decode order insertion into this queue, you don't need it, but if there's a chance that samples will come in out-of-order, you'll eventually need something like DecodeOrderSampleMap (or to sort the queue).
>
> With a std::map, this would turn into:
>
>     queue.erase(queue.start(), queue.upper_bound({now, MediaTime::zeroTime()});
>
> (Erase everything from the beginning up until the first sample with a decode time greater than now.)
Thanks, we don't need that yet but will soon. I will change it in an upcoming patch.
> > Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:319
> > +    static const double audioRendererLatency = 0.2;
> > +    auto audioTrack = m_audioTrackMap.get(track.id());
> > +    MediaTime timelineOffset = audioTrack->timelineOffset();
> > +    if (timelineOffset == MediaTime::invalidTime()) {
> > +        timelineOffset = calculateTimelineOffset(sample, audioRendererLatency);
> > +        audioTrack->setTimelineOffset(timelineOffset);
>
> This...
>
> > Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:351
> > +    auto videoTrack = m_videoTrackMap.get(track.id());
> > +    MediaTime timelineOffset = videoTrack->timelineOffset();
> > +    if (timelineOffset == MediaTime::invalidTime()) {
> > +        timelineOffset = calculateTimelineOffset(sample, 0.01);
>
> And this. First off, we should probably document the 0.01s value (the same way audioRendererLatency is documented by making it a static const). But shouldn't the offset values be the same between the audio and video tracks? Like, should this be stored on a per-MediaPlayer basis rather than a per-track basis, so that the audio and video don't get out of sync?
Yes, there should be one value. We need to experiment to figure out what the right value is long term, but it is fixed for now.
> > Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:455
> > +    m_pendingAudioSampleQueue.clear();
> > +    for (auto& renderer : m_audioRenderers.values()) {
> > +        [m_statusChangeListener stopObservingRenderer:renderer.get()];
> > +        [renderer flush];
> > +        [renderer stopRequestingMediaData];
> > +    }
> > +    m_audioRenderers.clear();
>
> We don't remove the renderers from the synchronizer here, but we do above in destroyAudioRenderer(). Could we refactor this a bit to make a destroyAudioRenderer(AVSampleBufferAudioRenderer*), and call that from both destroyAudioRenderers() and destroyAudioRenderer(AtomicString)?
Fixed.
> > Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:485
> > +    if (status.integerValue == AVQueuedSampleBufferRenderingStatusRendering)
> > +        m_audioTrackMap.get(trackID)->setTimelineOffset(MediaTime::invalidTime());
>
> I don't know that this is necessary. You'll probably get the KVO that triggers this after you've already started pushing in data; it may cause discontinuities.
This only forces a recalculation of the offset; I don't think it should cause discontinuities.
> > Source/WebCore/platform/graphics/avfoundation/objc/MediaSampleAVFObjC.mm:44
> > +MediaTime MediaSampleAVFObjC::outputPresentationTime() const
> > +{
> > +    return toMediaTime(CMSampleBufferGetOutputPresentationTimeStamp(m_sample.get()));
> > +}
> > +
>
> I don't think Output Presentation Timestamps are necessary here. I doubt we'll ever be dealing with samples that play backwards, or are trimmed.

We can talk about this offline and revisit.
Eric Carlson
Comment 6
2017-01-11 17:58:29 PST
Created attachment 298646 [details]
Patch for landing.
Eric Carlson
Comment 7
2017-01-11 19:50:09 PST
Created attachment 298653 [details]
Updated patch for landing.
WebKit Commit Bot
Comment 8
2017-01-11 21:23:37 PST
Comment on attachment 298653 [details]
Updated patch for landing.

Clearing flags on attachment: 298653

Committed r210621: <http://trac.webkit.org/changeset/210621>
WebKit Commit Bot
Comment 9
2017-01-11 21:23:41 PST
All reviewed patches have been landed. Closing bug.
Eric Carlson
Comment 10
2017-01-12 05:58:14 PST
Plus https://trac.webkit.org/r210631 to revert an accidental change.