Bug 180522 - Web audio without audio output should not require any user gesture on iOS
Summary: Web audio without audio output should not require any user gesture on iOS
Status: NEW
Alias: None
Product: WebKit
Classification: Unclassified
Component: WebRTC
Version: Safari 11
Hardware: iPhone / iPad iOS 11
Importance: P2 Normal
Assignee: Nobody
URL:
Keywords: InRadar
Depends on:
Blocks:
 
Reported: 2017-12-06 22:41 PST by Adam
Modified: 2022-05-06 13:26 PDT
CC: 7 users

See Also:


Attachments

Description Adam 2017-12-06 22:41:06 PST
We use the following code to analyse the audio input level of a local MediaStream coming from getUserMedia. This works fine in Safari on a Mac, but on iOS you get a constant 0 (because timeDomainData[idx] always returns 128).

navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
  const context = new (window.AudioContext || window.webkitAudioContext)();
  const sourceNode = context.createMediaStreamSource(stream);
  const analyser = context.createAnalyser();
  sourceNode.connect(analyser);
  const timeDomainData = new Uint8Array(analyser.frequencyBinCount);

  setInterval(() => {
    analyser.getByteTimeDomainData(timeDomainData);

    // 128 is the zero line for byte time-domain data, so the largest
    // deviation from it is the peak amplitude of the current frame.
    let max = 0;
    for (let idx = 0; idx < timeDomainData.length; idx++) {
      max = Math.max(max, Math.abs(timeDomainData[idx] - 128));
    }
    // audioLevel is a DOM element referenced via its id (implicit global).
    audioLevel.innerHTML = (max / 128);
  }, 100);
}).catch(err => {
  alert(err.name + ' ' + err.message);
});


You can try it at https://output.jsbin.com/rexolan
Comment 1 Adam 2017-12-06 22:49:15 PST
It also does not work for remote streams.
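
For context, the remote case is the same analyser graph fed from a peer connection's track event (a sketch; pc is an assumed RTCPeerConnection, not part of the original report):

// Same analyser graph, but fed by a remote MediaStream from WebRTC.
// On iOS this hits the same restriction as the local-capture case.
pc.ontrack = (event) => {
  const context = new (window.AudioContext || window.webkitAudioContext)();
  const sourceNode = context.createMediaStreamSource(event.streams[0]);
  const analyser = context.createAnalyser();
  sourceNode.connect(analyser);
  // ...poll getByteTimeDomainData as in the description above...
};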
Comment 2 youenn fablet 2017-12-07 10:40:40 PST
(In reply to Adam from comment #1)
> It also does not work for remote streams.

On iOS, AudioContext needs a user gesture.
Can you retry by starting the AudioContext as part of a user gesture?
We should probably remove that restriction when getUserMedia is active.
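
A minimal sketch of that workaround, with the AudioContext created synchronously inside a click handler (the startButton element is an assumption, not part of the original page):

// Creating the AudioContext during the click keeps it inside the user
// gesture, so iOS unlocks it before the asynchronous capture starts.
document.getElementById('startButton').addEventListener('click', () => {
  const context = new (window.AudioContext || window.webkitAudioContext)();
  navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
    const sourceNode = context.createMediaStreamSource(stream);
    const analyser = context.createAnalyser();
    sourceNode.connect(analyser);
    // ...poll getByteTimeDomainData as in the description above...
  });
});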
Comment 3 Andrew Morris 2017-12-07 15:12:14 PST
> On iOS, AudioContext needs a user gesture.

Is it intended that AudioContext needs a user gesture even when there is no audio output? Being able to visualize the audio volume would be especially useful in exactly this use case when audio output is blocked.
Comment 4 youenn fablet 2017-12-07 15:23:03 PST
(In reply to Andrew Morris from comment #3)
> > On iOS, AudioContext needs a user gesture.
> 
> Is it intended that AudioContext needs a user gesture even when there is no
> audio output? Being able to visualize the audio volume would be especially
> useful in exactly this use case when audio output is blocked.

Good point, maybe the restriction should be targeted at the audio output.
Comment 5 Radar WebKit Bug Importer 2017-12-08 10:48:47 PST
<rdar://problem/35938085>
Comment 6 Radar WebKit Bug Importer 2017-12-08 10:48:49 PST
<rdar://problem/35938084>
Comment 7 Adam 2017-12-10 16:00:53 PST
Yes, it does work if we create the audio context when you click a button.
https://output.jsbin.com/juzufum

Like Andrew said, though, it would be great if that weren't the case.
Comment 8 youenn fablet 2017-12-12 09:08:19 PST
Bug 180680 is fixing the case of a page capturing data.
Let's keep this bug open for the wider question of allowing web audio analysis without a user gesture.
Comment 9 Adam 2017-12-12 17:57:41 PST
Fantastic, thanks
Comment 10 Dag-Inge Aas 2018-03-01 05:37:57 PST
We hit this bug as well, where we want to do an automated microphone check for users on their way into a conversation. Our code looks something like this:

function handleUserClick() {
  return mediaDevicesService
    .getUserMedia(constraints)
    .then(mediaStream => {
      this.setState({ mediaStream });
    })
    .then(() => enumerateDevices())
    .then(() => verifyMicrophoneWorks(this.state.mediaStream.stream))
    .then(isMicrophoneWorking =>
      this.props.updateMicStatus(isMicrophoneWorking)
    )
    .catch(error => <GetUserMediaErrorFeedback error={error} />);
}

This all happens in a single promise chain, but because only the original button click counts as a user action, the audio context check fails, even though all we do is call getByteFrequencyData and no audio is actually playing.
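
One possible refactor, sketched under the assumption that verifyMicrophoneWorks can be adapted to take the context as a parameter (not the original code):

function handleUserClick() {
  // Created synchronously, before any .then() runs, while the click
  // still counts as a user gesture, so iOS unlocks the context up front.
  const context = new (window.AudioContext || window.webkitAudioContext)();
  return mediaDevicesService
    .getUserMedia(constraints)
    .then(mediaStream => verifyMicrophoneWorks(context, mediaStream.stream));
}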

It would be great if we were allowed to play audio and use an AudioContext once getUserMedia permission has been granted to the page for that session.
Comment 11 youenn fablet 2018-03-01 07:41:28 PST
daginge, I believe that would be addressed in bug 180680.
Can you try the latest iOS beta?
Comment 12 Dag-Inge Aas 2018-04-03 01:14:36 PDT
Sorry for the late reply, youenn. We decided to refactor, as more browsers are adopting autoplay restrictions now and we don't want to take the chance that this causes issues in the future. All of our AudioContexts are now triggered by a user action.
Comment 13 youenn fablet 2018-04-03 08:36:07 PDT
(In reply to daginge from comment #12)
> Sorry for the late reply, youenn. We decided to refactor, as more
> browsers are adopting autoplay restrictions now and we don't want to
> take the chance that this causes issues in the future. All of our
> AudioContexts are now triggered by a user action.

No problem daginge.
I believe we could tackle this issue by introducing some WebAudio-specific constructs that would allow analyzing audio without producing any audio output.
In that case, we could bypass the autoplay restrictions.
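
For illustration, the kind of graph such a construct would have to describe already looks like this today (a sketch, not an existing or proposed API; stream is assumed to come from getUserMedia):

// Analysis-only graph: microphone -> analyser, with nothing routed to
// context.destination, so no sound can ever reach the speakers.
const context = new (window.AudioContext || window.webkitAudioContext)();
const source = context.createMediaStreamSource(stream);
const analyser = context.createAnalyser();
source.connect(analyser); // intentionally never connected to context.destination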
Comment 14 youenn fablet 2018-04-03 08:45:05 PDT
Maybe OfflineAudioContext is what we want.
Comment 15 youenn fablet 2018-04-03 10:32:02 PDT
(In reply to youenn fablet from comment #14)
> Maybe OfflineAudioContext is what we want.

Discussed with Jer; OfflineAudioContext is not designed for this use case, since it renders as fast as possible into a fixed-length buffer and cannot consume a live MediaStream.
Filed https://github.com/WebAudio/web-audio-api/issues/1551