If audio is played through a connected AudioWorkletNode after calling mediaDevices.getUserMedia, the audio is badly distorted. A demonstration of the problem is given here: https://goldwave.com/audio.html Choose the Get Media button, then choose the Play button: the audio is badly distorted. Reload the page and choose the Play button only: the audio is clear. This issue occurs on iOS devices and macOS, all fully updated (Safari 16.3).
<rdar://problem/104870451>
Created attachment 464823 [details] Simple test to recreate the audio worklet bug on iOS
We also encountered this issue on iOS 16.2. It affects the whole web audio context and its nodes, e.g. OscillatorNode, AudioBufferSourceNode, etc. It can be reproduced by using the first AudioWorklet inside an audio context. I've attached a simple test to reproduce it and also deployed it on GitHub Pages: https://delude88.github.io/ios-audioworklet-sample-rate/ Be aware: it plays sine waves at full volume. Also: after closing the audio context and creating another one, the issue is gone (until the first AudioWorklet is used again).
Looking at https://goldwave.com/audio.html, the issue we experience is that the rendering quantum becomes 960 frames instead of 128 frames, which our code doesn't handle well. Normally, as soon as WebAudio is in use, we request that CoreAudio use a rendering quantum of 128. However, that is not happening here, and that's what is causing the issue.
It is because of this logic:
```
void RemoteAudioSessionProxyManager::updatePreferredBufferSizeForProcess()
{
#if ENABLE(MEDIA_STREAM)
    if (CoreAudioCaptureSourceFactory::singleton().isAudioCaptureUnitRunning()) {
        CoreAudioCaptureSourceFactory::singleton().whenAudioCaptureUnitIsNotRunning([weakThis = WeakPtr { *this }] {
            if (weakThis)
                weakThis->updatePreferredBufferSizeForProcess();
        });
        return;
    }
#endif
    // ...
```
If we're capturing (which we are here, since we called getUserMedia), then we defer setting the preferred buffer size (128) until we're done capturing. Since capturing is ongoing, we just keep using 960, which breaks Web Audio.
(In reply to Chris Dumez from comment #5)
> It is because of this logic:
> ```
> void RemoteAudioSessionProxyManager::updatePreferredBufferSizeForProcess()
> {
> #if ENABLE(MEDIA_STREAM)
>     if (CoreAudioCaptureSourceFactory::singleton().isAudioCaptureUnitRunning()) {
>         CoreAudioCaptureSourceFactory::singleton().whenAudioCaptureUnitIsNotRunning([weakThis = WeakPtr { *this }] {
>             if (weakThis)
>                 weakThis->updatePreferredBufferSizeForProcess();
>         });
>         return;
>     }
> #endif
>     // ...
> ```
>
> If we're capturing (which we are here since we called getUserMedia), then we
> defer the setting of the preferred buffer size (128) until we're done
> capturing. Since capturing is ongoing, we just keep using 960, which breaks
> Web Audio.

This is a regression from Youenn's Bug 235317.
Pull request: https://github.com/WebKit/WebKit/pull/9711
Committed 259964@main (87395a602807): <https://commits.webkit.org/259964@main> Reviewed commits have been landed. Closing PR #9711 and removing active labels.