| Summary: | Distorted audio after getUserMedia when playing with AudioWorkletNode |
| --- | --- |
| Product: | WebKit |
| Component: | Web Audio |
| Status: | RESOLVED FIXED |
| Severity: | Major |
| Priority: | P2 |
| Version: | Safari 16 |
| Hardware: | Unspecified |
| OS: | iOS 16 |
| Reporter: | goldwaving <webkit> |
| Assignee: | Chris Dumez <cdumez> |
| CC: | cdumez, eric.carlson, jer.noble, tobias.hegemann, webkit-bug-importer, youennf |
| Keywords: | InRadar |
| Bug Depends on: | |
| Bug Blocks: | 235317 |
| Attachments: | Simple test to recreate the audio worklet bug on iOS (attachment 464823) |
Description
goldwaving
2023-01-24 09:08:51 PST
Created attachment 464823 [details]
Simple test to recreate the audio worklet bug on iOS
We also encountered this issue on iOS 16.2. It affects the whole web audio context and its nodes (e.g. OscillatorNode, AudioBufferSourceNode). It can be reproduced when the first AudioWorklet is used inside an audio context. I've attached a simple test to reproduce it, also deployed at https://delude88.github.io/ios-audioworklet-sample-rate/. Be aware: it plays sine waves at full volume. Also: after closing the audio context and creating another one, the issue is gone (until the first AudioWorklet is used again).

---

Looking at https://goldwave.com/audio.html, the issue we experience is that the rendering quantum becomes 960 frames instead of 128 frames, which our code doesn't deal with well.

---

Normally, as soon as WebAudio is in use, we request that CoreAudio use a rendering quantum of 128. However, that is not happening here, and that's causing the issue. It is because of this logic:
```
void RemoteAudioSessionProxyManager::updatePreferredBufferSizeForProcess()
{
#if ENABLE(MEDIA_STREAM)
    // If a capture unit is running, defer the buffer size update until it stops.
    if (CoreAudioCaptureSourceFactory::singleton().isAudioCaptureUnitRunning()) {
        CoreAudioCaptureSourceFactory::singleton().whenAudioCaptureUnitIsNotRunning([weakThis = WeakPtr { *this }] {
            if (weakThis)
                weakThis->updatePreferredBufferSizeForProcess();
        });
        return;
    }
#endif
    // ...
```
If we're capturing (which we are here, since we called getUserMedia), then we defer setting the preferred buffer size (128) until we're done capturing. Since capturing is ongoing, we just keep using 960, which breaks Web Audio.

---

(In reply to Chris Dumez from comment #5)
> If we're capturing (which we are here, since we called getUserMedia), then we
> defer setting the preferred buffer size (128) until we're done capturing.
> Since capturing is ongoing, we just keep using 960, which breaks Web Audio.

This is a regression from Youenn's Bug 235317.

---

Pull request: https://github.com/WebKit/WebKit/pull/9711

---

Committed 259964@main (87395a602807): <https://commits.webkit.org/259964@main>

Reviewed commits have been landed. Closing PR #9711 and removing active labels.
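Side note for page authors hitting this bug: the reporter's code broke because it assumed a fixed 128-frame rendering quantum. Until a fixed WebKit ships, one workaround is to avoid that assumption. Below is a minimal sketch (hypothetical helper, not from the attached test case) of a rechunker that buffers arbitrary-size audio blocks and emits fixed 128-frame chunks, so downstream DSP code never sees a 960-frame block:

```javascript
// Sketch: accumulate input blocks of any size and emit fixed-size
// 128-frame chunks, carrying any remainder over to the next call.
// "Rechunker" is a hypothetical name, not part of the Web Audio API.
const QUANTUM = 128;

class Rechunker {
  constructor(frameSize = QUANTUM) {
    this.frameSize = frameSize;
    this.pending = new Float32Array(0); // leftover samples from prior pushes
  }

  // Append an input block of any length; returns an array of complete
  // frameSize-length Float32Array chunks.
  push(block) {
    const merged = new Float32Array(this.pending.length + block.length);
    merged.set(this.pending, 0);
    merged.set(block, this.pending.length);

    const chunks = [];
    let offset = 0;
    while (merged.length - offset >= this.frameSize) {
      chunks.push(merged.slice(offset, offset + this.frameSize));
      offset += this.frameSize;
    }
    this.pending = merged.slice(offset); // keep the remainder
    return chunks;
  }
}
```

Feeding a 960-frame block yields seven complete 128-frame chunks, with the remaining 64 frames held until the next call.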