Allow AudioContext to start when getUserMedia is on
Created attachment 329057: Patch
Comment on attachment 329057: Patch

Clearing flags on attachment: 329057

Committed r225785: <https://trac.webkit.org/changeset/225785>
All reviewed patches have been landed. Closing bug.
<rdar://problem/35995701>
With this fix we could autoplay a video element that has audio as long as the user previously approved a getUserMedia request. In the current stable Safari on desktop it is not enough to approve a getUserMedia request at the beginning of the session: the user has to be actively capturing the mic or webcam for video autoplay to work. This is a regression. In our app one user broadcasts and several others only view; capturing the viewers' mics just to make autoplay work doesn't make sense.
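For clarity, a minimal repro sketch of the scenario (the video element is assumed to exist and carry a remote stream; this is not our actual app code):

    // Approve a getUserMedia prompt, stop capturing, then try to autoplay.
    async function repro() {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      // Stop capturing right away; only the earlier approval remains.
      stream.getTracks().forEach(track => track.stop());

      const video = document.querySelector('video'); // assumed remote video element
      try {
        await video.play(); // per this report, rejects once capture has stopped
      } catch (err) {
        console.log('autoplay blocked:', err);
      }
    }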
> In current Safari stable on desktop it's not enough to approve a
> getUserMedia request in the beginning of the session. The user has to
> actively capture mic or webcam to make video autoplay work.
> This is a regression. In our app we have one user broadcasting and several
> viewers. Capturing their mic just to make autoplay work doesn't make sense.

The principle is that a user should make a gesture to activate sound. That gesture can be the getUserMedia prompt, a click on a video element, a play button, or an "activate sound" button. Once a page is producing audio content, other video elements should autoplay. I am not sure what your exact request is or what regression you are pointing at.
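A minimal sketch of that principle, assuming a hypothetical "activate sound" button with id "activate-sound" (any audible source started from the gesture should do):

    // A user gesture starts audio, after which the page counts as
    // producing audio content and other video elements should autoplay.
    document.getElementById('activate-sound').addEventListener('click', () => {
      const ctx = new (window.AudioContext || window.webkitAudioContext)();
      const osc = ctx.createOscillator();   // any audible source works
      osc.connect(ctx.destination);
      osc.start();
      osc.stop(ctx.currentTime + 0.1);      // a short blip is enough
    });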
I'm creating an AudioContext and playing it in response to a user gesture. I can hear the noise from the AudioContext, but later, when I try to autoplay a video element with audio, it is muted unless I'm actively capturing the local mic/cam.

This is the click-event callback that enables audio:

    enableAudio() {
      let audioContext = 'AudioContext' in window
        ? new AudioContext()
        : new window.webkitAudioContext();

      // Create a 2-second buffer with 2 channels.
      let buffer = audioContext.createBuffer(2, audioContext.sampleRate * 2, audioContext.sampleRate);

      // Fill both channels with white noise.
      for (var channel = 0; channel < 2; channel++) {
        // getChannelData() gives us the actual Float32Array that holds the samples.
        var nowBuffering = buffer.getChannelData(channel);
        for (var i = 0; i < audioContext.sampleRate * 2; i++) {
          // Math.random() is in [0, 1); audio samples need to be in [-1, 1].
          nowBuffering[i] = Math.random() * 2 - 1;
        }
      }

      let source = audioContext.createBufferSource();
      source.buffer = buffer;
      source.connect(audioContext.destination);
      source.start(0);

      // ... then create a PeerConnection and try to autoplay remote video+audio.
    }
To summarize, your issue is:
- AudioContext is started on user click and produces audio.
- A video element is added later on and will not autoplay, even though Web Audio is producing audio.

There are two workarounds I can think of right now (sketched below):
- Play the audio of the video element through the AudioContext instead of through the video element.
- When the user clicks to start the AudioContext, also call play() on the video element.
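A rough sketch of both workarounds, assuming a video element for the remote stream and a click handler like the one above (element ids and variable names are hypothetical):

    // Workaround 1: route the element's audio through the AudioContext.
    // Note: after createMediaElementSource(), the element's audio plays
    // only through the graph, not through the element itself.
    const audioContext = new (window.AudioContext || window.webkitAudioContext)();
    const video = document.getElementById('remote-video'); // hypothetical id
    const sourceNode = audioContext.createMediaElementSource(video);
    sourceNode.connect(audioContext.destination);

    // Workaround 2: piggyback on the same user gesture that unlocks audio.
    document.getElementById('enable-audio').addEventListener('click', () => {
      video.play().catch(err => console.log('play() rejected:', err));
    });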
Thank you for the workarounds. Mixing all the audio and playing it through a single AudioContext works with WebRTC streams, but I think it will break lip sync. It also doesn't help with autoplaying HLS and YouTube videos in the web conference.
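For reference, the WebRTC variant of that mixing workaround would look roughly like this (a sketch only; peerConnection, audioContext, and the element id are assumed from context):

    // Play a remote WebRTC stream's audio through one shared AudioContext
    // while the video element renders the (muted) video. Keeping the
    // element muted and the audio in the graph is what risks lip-sync drift.
    peerConnection.ontrack = (event) => {
      const [remoteStream] = event.streams;
      const video = document.getElementById('remote-video'); // hypothetical id
      video.srcObject = remoteStream;
      video.muted = true;               // audio comes from the graph instead
      video.play().catch(() => {});

      const sourceNode = audioContext.createMediaStreamSource(remoteStream);
      sourceNode.connect(audioContext.destination);
    };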