Bug 190346 - Mixing media audio with call audio while in WebRTC call (AudioContext issue)
Summary: Mixing media audio with call audio while in WebRTC call (AudioContext issue)
Status: NEW
Alias: None
Product: WebKit
Classification: Unclassified
Component: WebRTC
Version: WebKit Nightly Build
Hardware: iPhone / iPad iOS 11
Importance: P2 Normal
Assignee: Nobody
URL:
Keywords: InRadar
Depends on:
Blocks:
 
Reported: 2018-10-08 04:52 PDT by Mike Block
Modified: 2021-10-05 04:22 PDT
CC: 6 users

See Also:


Attachments

Description Mike Block 2018-10-08 04:52:50 PDT
We have a use case where we play video content with an audio track from a file source in a web page, and then initiate a WebRTC call within this same context. Audio streams (both local and remote) are attached to the call and are turned on and off during the playback of the video file. The parties "punch in and out" to talk as needed.

Doing this confuses the AudioContext. It thinks it's in a video call, so it switches context to support a standard audio format and mixing for the call. However, introducing video file playback on top of these WebRTC streams pushes the sound from the video file into the background, after which it erratically switches in volume for no discernible reason. The call also routes the audio to the earpiece, causing poor sound quality.

The AudioContext should allow blending of audio from multiple sources, and should allow playback from the internal speaker or default device. If this isn't the default setting, it should be controllable via JavaScript to allow better audio management. After all, this is one web page, not several apps competing for foreground audio on the device.
Comment 1 Radar WebKit Bug Importer 2018-10-08 08:37:46 PDT
<rdar://problem/45089200>
Comment 2 rykkers 2020-05-15 09:10:15 PDT
Did you ever find a workaround? Frustrating that I'm here nearly 2 years later with a similar issue.
All I've found is to temporarily request getUserMedia without audio and switch out the WebRTC streams on the peer connection. That seems to yield OK results, but is obviously not ideal.

Would be better to just have proper control of the context, or better yet, have it not change automatically on you!
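The stream-switching workaround described above might look roughly like this. This is an untested sketch: `pc` is assumed to be the existing RTCPeerConnection, and the function names are made up for illustration.

```javascript
// Rough sketch: temporarily re-request getUserMedia without audio and
// swap the tracks on the peer connection. `pc` and the helper names
// are assumptions, not a tested implementation.
async function punchOut(pc) {
  // Re-acquire the camera with audio disabled so the audio route is
  // released, and swap the fresh video track onto the existing sender.
  const videoOnly = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: false,
  });
  const videoSender = pc
    .getSenders()
    .find((s) => s.track && s.track.kind === "video");
  if (videoSender) {
    await videoSender.replaceTrack(videoOnly.getVideoTracks()[0]);
  }
  // Stop and detach the outgoing microphone track.
  const audioSender = pc
    .getSenders()
    .find((s) => s.track && s.track.kind === "audio");
  if (audioSender) {
    audioSender.track.stop();
    await audioSender.replaceTrack(null);
  }
}
```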
Comment 3 youenn fablet 2020-05-15 09:24:16 PDT
This might be a similar issue as https://bugs.webkit.org/show_bug.cgi?id=208134.

Is it iOS only, or macOS as well? Could you provide a repro case?
Comment 4 rykkers 2020-05-15 09:59:34 PDT
Just on iOS.

Here's a simple example: https://jsfiddle.net/2gh05w48/

Thanks
Comment 5 rykkers 2020-05-15 10:01:12 PDT
Obviously, to rerun the test you have to refresh the page, not just re-run the fiddle
Comment 6 David Gölzhäuser 2021-09-30 04:29:37 PDT
Do you play the audio separately from the video? I had a similar issue: I got the audioTrack from the MediaStream and played it with an <audio> tag, while the video was played with a <video> tag.

I now simply attach the whole MediaStream to the <video> tag, and it works.
Comment 7 David Gölzhäuser 2021-10-01 00:30:10 PDT
I need to correct my previous post: it is NOT working on iOS 15(.1 Beta).
Comment 8 David Gölzhäuser 2021-10-05 04:22:32 PDT
I figured out a temporary workaround.

Here is my use case:
Cordova-based iOS application which initially receives a VideoTrack from the RTCPeerConnection. The user can then decide to start a voice call, which asks the user for microphone access and adds the microphone track to the RTCPeerConnection; this results in receiving the remote peer's microphone stream, but with the drawback of reduced volume due to the wrong speaker selection.

I worked around it by first adding a MediaStream's AudioTrack (created with `audioContext.createMediaStreamDestination().stream`) to the RTCPeerConnection. Then, when receiving the remote peer's microphone stream, the user is asked for microphone permission using `getUserMedia`. When the user accepts and the microphone stream is available, I simply replace the aforementioned track with the microphone track using the RTCRtpSender method `replaceTrack`. (The `RTCRtpSender` is returned from the `RTCPeerConnection` method `addTrack`.)
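A sketch of this placeholder-track approach, for reference. `pc` is assumed to be an existing RTCPeerConnection; the function names are illustrative, not part of any API.

```javascript
// Negotiate a silent placeholder audio track up front, then swap in
// the real microphone track later via RTCRtpSender.replaceTrack.
function addPlaceholderAudioTrack(pc) {
  // A silent audio track taken from an AudioContext destination node.
  const audioContext = new AudioContext();
  const destination = audioContext.createMediaStreamDestination();
  const [silentTrack] = destination.stream.getAudioTracks();
  // addTrack returns the RTCRtpSender we will call replaceTrack on.
  return pc.addTrack(silentTrack, destination.stream);
}

async function upgradeToMicrophone(audioSender) {
  // Ask for microphone permission only once the user starts the voice
  // call, then swap the silent placeholder for the real track.
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  await audioSender.replaceTrack(mic.getAudioTracks()[0]);
}
```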

However, this fix only works once per app lifecycle. I am guessing that the `AVAudioSession` gets confused; I tried to reset it using a Cordova plugin, but that didn't yield the wanted results.