WebKit Bugzilla
Bug 253952 (NEW): AudioContext will play with 20ms callback only a MediaStream and AudioWorkletNode in graph
https://bugs.webkit.org/show_bug.cgi?id=253952
wangweisi.night12138 (Reported 2023-03-15 02:50:31 PDT)
Created attachment 465444 [details]
reproduction

Tiny example:

index.html
```html
<!DOCTYPE html>
<html>
<script type="application/javascript">
  class MyAudioNode extends AudioWorkletNode {
    constructor(ctx) {
      super(ctx, "my-worklet-processor");
    }
  }
  (async () => {
    const ctx = new AudioContext();
    await ctx.audioWorklet.addModule('bypass.worklet.js');
    const node = new MyAudioNode(ctx);
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const source = ctx.createMediaStreamSource(stream);
    source.connect(node).connect(ctx.destination);
    ctx.resume();
  })();
</script>
</html>
```

bypass.worklet.js
```js
class MyWorkletProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
  }

  process(inputs, outputs) {
    // Use the 1st input and output only to keep the example simple. |input|
    // and |output| have a structure similar to the AudioBuffer interface
    // (i.e. an array of Float32Array).
    const input = inputs[0];
    if (!(input?.length)) return true;
    const output = outputs[0];
    if (!(output?.length)) return true;

    // Copy-in, process and copy-out.
    output.forEach((e, i) => e.set(input[i % input.length]));
    return true;
  }
}

registerProcessor("my-worklet-processor", MyWorkletProcessor);
```

The bug occurs in `Source/WebCore/platform/audio/cocoa/MediaSessionManagerCocoa.mm:168`:
```cpp
size_t bufferSize = m_defaultBufferSize;
if (webAudioCount)
    bufferSize = AudioUtilities::renderQuantumSize;
else if (captureCount || audioMediaStreamTrackCount) {
    // In case of audio capture or audio MediaStreamTrack playing, we want to grab 20 ms chunks to limit the latency so that it is not noticeable by users
    // while having a large enough buffer so that the audio rendering remains stable, hence a computation based on sample rate.
    bufferSize = WTF::roundUpToPowerOfTwo(AudioSession::sharedSession().sampleRate() / 50);
} else if (m_supportedAudioHardwareBufferSizes && DeprecatedGlobalSettings::lowPowerVideoAudioBufferSizeEnabled())
    bufferSize = m_supportedAudioHardwareBufferSizes.nearest(kLowPowerVideoBufferSize);

AudioSession::sharedSession().setPreferredBufferSize(bufferSize);
```
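For illustration, a minimal standalone C++ sketch (not WebKit source) of the selection logic in the quoted branch, showing which path is taken for this repro if `webAudioCount` is still 0 when the callback is configured. `std::bit_ceil` stands in for `WTF::roundUpToPowerOfTwo`, the 512-frame default is a hypothetical value for `m_defaultBufferSize`, and the hardware-buffer-size branch is omitted:

```cpp
// Sketch of the buffer-size selection in MediaSessionManagerCocoa (C++20).
// Constants and the default value are stand-ins, not the real WebKit values.
#include <bit>
#include <cstddef>
#include <cstdio>

constexpr std::size_t renderQuantumSize = 128; // AudioUtilities::renderQuantumSize
constexpr std::size_t defaultBufferSize = 512; // hypothetical m_defaultBufferSize

std::size_t preferredBufferSize(std::size_t webAudioCount, std::size_t captureCount,
                                std::size_t audioMediaStreamTrackCount, std::size_t sampleRate)
{
    std::size_t bufferSize = defaultBufferSize;
    if (webAudioCount)
        bufferSize = renderQuantumSize;
    else if (captureCount || audioMediaStreamTrackCount)
        bufferSize = std::bit_ceil(sampleRate / 50); // ~20 ms, rounded up to a power of two
    return bufferSize;
}

int main()
{
    // Repro scenario as described: AudioContext + AudioWorkletNode present,
    // but webAudioCount is reportedly still 0 when the callback is created.
    std::printf("webAudioCount=0, capture=1: %zu frames (~%.1f ms at 48 kHz)\n",
                preferredBufferSize(0, 1, 1, 48000),
                preferredBufferSize(0, 1, 1, 48000) * 1000.0 / 48000.0);
    std::printf("webAudioCount=1:            %zu frames (~%.1f ms at 48 kHz)\n",
                preferredBufferSize(1, 0, 0, 48000),
                preferredBufferSize(1, 0, 0, 48000) * 1000.0 / 48000.0);
    return 0;
}
```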
Attachments
reproduction (534.44 KB, application/zip), 2023-03-15 02:50 PDT, wangweisi.night12138, no flags
wangweisi.night12138 (Comment 1, 2023-03-15 03:01:34 PDT)
(In reply to wangweisi.night12138 from comment #0)
More details: when WebKit creates the audio callback, `webAudioCount` is still 0, so WebKit asks the system for a callback of `sampleRate() / 50` frames. Once the AudioContext is running, however, the cocoa side requests data in 128-sample chunks, so the callback size is misaligned with the render quantum and the output glitches on the speakers.
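To make the misalignment concrete, a small standalone sketch (assumptions: a 48 kHz session and a hardware callback of exactly `sampleRate() / 50` frames, as described above, with 128 frames per Web Audio render quantum) that counts how many whole render quanta fit into each hardware callback:

```cpp
// Illustration only: how 128-frame render quanta line up against a
// 20 ms (960-frame) hardware callback at 48 kHz.
#include <cstddef>
#include <cstdio>

int main()
{
    constexpr std::size_t renderQuantum = 128;          // Web Audio render quantum (frames)
    constexpr std::size_t callbackFrames = 48000 / 50;  // 960 frames = 20 ms at 48 kHz

    std::size_t pending = 0; // frames carried over between hardware callbacks
    for (int callback = 1; callback <= 4; ++callback) {
        pending += callbackFrames;
        std::size_t quanta = pending / renderQuantum; // whole quanta that fit
        pending %= renderQuantum;                     // leftover frames
        std::printf("callback %d: %zu render quanta, %zu frames left over\n",
                    callback, quanta, pending);
    }
    return 0;
}
```

Since 960 / 128 = 7.5, the worklet would have to render 7 and 8 quanta on alternating callbacks rather than a fixed number per callback, which matches the misalignment described above.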
Radar WebKit Bug Importer (Comment 2, 2023-03-22 02:51:14 PDT)
<rdar://problem/107047061>