Bug 177292 - Audio stream volume example doesn't work
Summary: Audio stream volume example doesn't work
Status: RESOLVED CONFIGURATION CHANGED
Alias: None
Product: WebKit
Classification: Unclassified
Component: WebRTC
Version: WebKit Nightly Build
Hardware: Unspecified
OS: Unspecified
Importance: P2 Normal
Assignee: Nobody
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2017-09-20 23:46 PDT by Ben
Modified: 2018-02-20 10:56 PST
CC List: 6 users

See Also:


Attachments
Test case using Hark (2.28 KB, text/html)
2017-11-07 22:55 PST, Chad Phillips
Modified test (3.44 KB, text/html)
2017-11-15 12:06 PST, Eric Carlson
Modified Hark (9.06 KB, application/x-javascript)
2017-11-15 12:07 PST, Eric Carlson
Passing test case with adjusted Hark library. (2.83 KB, text/html)
2017-11-15 13:44 PST, Chad Phillips
Passing test case using AudioContext only (1.51 KB, text/html)
2017-11-15 13:46 PST, Chad Phillips

Description Ben 2017-09-20 23:46:36 PDT
The audio stream volume example in webrtc/samples doesn't show the volume on iOS Safari 11.

Demo:
https://webrtc.github.io/samples/src/content/getusermedia/volume/

Source:
https://github.com/webrtc/samples/tree/gh-pages/src/content/getusermedia/volume
Comment 1 daniele.tagliavini 2017-10-10 03:09:28 PDT
Same problem for me in this demo.

Using WebRTC from an iPhone with Safari 11 to an Android phone running Chrome, audio is heard on Android but not on the iPhone, while it works on both sides when calling Android to Android.
Comment 2 Chad Phillips 2017-11-07 22:55:43 PST
Created attachment 326307 [details]
Test case using Hark

Same issue here. Attached is a test case I wrote using the popular Hark WebRTC speech detection library.

This works fine on Chrome, Opera, and desktop Safari. On iOS Safari, the volume is always 0.
Comment 3 Eric Carlson 2017-11-09 10:41:23 PST
An AudioContext can only be started from within a touchend handler on iOS. Adding something like audioContext.startRendering() in a touchend event handler makes the attached test case work.
Comment 4 Ben 2017-11-09 13:33:45 PST
The user already allowed mic permission, why restrict AudioContext to a touchend handler? It doesn't add any security.
Comment 5 Eric Carlson 2017-11-09 14:46:33 PST
(In reply to Ben from comment #4)
> The user already allowed mic permission, why restrict AudioContext to a
> touchend handler? It doesn't add any security.

The user gave permission to capture from the microphone, not to use the speaker. This is not a new requirement; iOS has always required a user gesture to start an AudioContext or a media element.
Comment 6 Ben 2017-11-10 05:04:01 PST
In almost all cases, when you request microphone permission, you also want to use the speakers. A microphone permission has prominent UI, higher security implications, and must be triggered by user interaction.

Requiring a developer to create the AudioContext from a touchend callback makes no sense in this case. The fact that WebRTC developers are confused is proof of that.
Comment 7 youenn fablet 2017-11-10 07:16:24 PST
When a web page is capturing mic or camera, we should probably allow "autoplay" of web audio.
When a web page is not capturing, we should stick with the current approach.
Comment 8 Chad Phillips 2017-11-10 14:29:57 PST
@youenn, I agree with your previous comment. As an application developer attempting to leverage WebRTC natively on iOS, I think the approach you suggest would both ease development and create a better user experience, without compromising security.

@Eric Carlson, would you be able to attach an update to my previous test case that passes on iOS currently? I've noticed a real dearth of examples and documentation for these newly available features, I'm sure other devs would appreciate a few of us beating down the weeds on this new trail... ;)
Comment 9 Ben 2017-11-14 23:56:00 PST
@youenn, should we reopen this issue?
Comment 10 philipp.weissensteiner 2017-11-15 05:35:44 PST
FWIW I agree with youenn and chad, adding audioContext.startRendering seems cumbersome and annoying after already prompting the user for microphone access.
Comment 11 youenn fablet 2017-11-15 07:25:34 PST
Let’s investigate this further.
Comment 12 Chad Phillips 2017-11-15 10:42:36 PST
While this is in progress, can anybody share an actual working example of getting the microphone input currently?

@Eric Carlson suggested AudioContext.startRendering(), but it doesn't look to me like that's an actual method. OfflineAudioContext.startRendering() is, but I'm guessing that's not correct, either.

I've tried HTMLMediaElement.play() as well to no effect.
Comment 13 youenn fablet 2017-11-15 11:42:20 PST
You need a user gesture, like a click on a button, that triggers a JS callback to start web audio rendering.
Comment 14 Eric Carlson 2017-11-15 12:06:07 PST
(In reply to Chad Phillips from comment #12)
> While this is in progress, can anybody share an actual working example of
> getting the microphone input currently?
> 
> @Eric Carlson suggested AudioContext.startRendering(), but it doesn't look
> to me like that's an actual method. OfflineAudioContext.startRendering() is,
> but I'm guessing that's not correct, either.
> 
> I've tried HTMLMediaElement.play() as well to no effect.

Sorry about that, startRendering() is indeed on OfflineAudioContext. You can use AudioContext.resume() to start playback.

I have attached a modified version of your test case that adds a button which starts and stops the audio context. hark.js doesn't expose the audio context, so I modified it to add suspend and resume methods, a state property, and to emit an event when the context state changes.
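
A minimal sketch of the resume-on-gesture pattern described above. The button id `start` is a hypothetical placeholder; on iOS the context comes up in the `suspended` state, and `resume()` must be called synchronously from inside the gesture handler:

```javascript
// Pure helper: should we call resume() for a given AudioContext state?
// Valid AudioContext states are 'suspended', 'running', and 'closed'.
function needsResume(state) {
  return state === 'suspended';
}

// Browser-only wiring; skipped when run outside a browser (e.g. in Node).
if (typeof document !== 'undefined' && typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  document.getElementById('start').addEventListener('click', () => {
    if (needsResume(ctx.state)) {
      ctx.resume(); // must run synchronously inside the gesture handler
    }
  });
}
```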
Comment 15 Eric Carlson 2017-11-15 12:06:55 PST
Created attachment 327007 [details]
Modified test
Comment 16 Eric Carlson 2017-11-15 12:07:33 PST
Created attachment 327008 [details]
Modified Hark
Comment 17 Chad Phillips 2017-11-15 13:42:54 PST
@Eric Carlson, yes! Thanks for the clues, clear now.

Attaching a few passing test cases for reference in case others get stuck.

The mistake I was making was calling methods on the object returned by audioContext.createMediaStreamSource(stream), which is a MediaStreamAudioSourceNode, not the original audioContext object. Calling resume() on the audioContext object itself does the trick. The audioContext object is also accessible via the MediaStreamAudioSourceNode's 'context' property.
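
A sketch of the distinction described above: resume() is a method of the AudioContext, not of the MediaStreamAudioSourceNode, and the node points back to its owning context via its `context` property. The getUserMedia wiring here is illustrative only:

```javascript
// Generic helper: recover the owning AudioContext from an AudioNode-like
// object via its 'context' property.
function owningContext(node) {
  return node.context;
}

// Browser-only wiring; skipped when the Web Audio API is unavailable.
if (typeof AudioContext !== 'undefined' && typeof navigator !== 'undefined') {
  navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const ctx = new AudioContext();
    const source = ctx.createMediaStreamSource(stream);
    // source.resume() would fail -- resume() lives on the context:
    owningContext(source).resume(); // owningContext(source) === ctx
  });
}
```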
Comment 18 Chad Phillips 2017-11-15 13:44:53 PST
Created attachment 327019 [details]
Passing test case with adjusted Hark library.

Using @Eric Carlson's patched Hark library...
Comment 19 Chad Phillips 2017-11-15 13:46:40 PST
Created attachment 327020 [details]
Passing test case using AudioContext only
Comment 20 Ben 2017-12-15 00:10:03 PST
Is this issue fixed by https://bugs.webkit.org/show_bug.cgi?id=180680 ?
Also related: https://bugs.webkit.org/show_bug.cgi?id=180522
Comment 21 Tomas Roggero 2018-02-20 10:31:15 PST
I love you, Ben (original reporter).

This bug, even though it was hard to find, was the reason none of my scripts worked.

Just make sure you create the webkitAudioContext in a click handler callback (no async allowed) and it works.

Great. I now just hope the fix is backwards compatible.
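
The pattern from this comment can be sketched as follows; the prefixed webkitAudioContext fallback and the `start` button id are assumptions for illustration:

```javascript
// Pick the available AudioContext constructor, preferring the unprefixed one;
// older Safari only exposes the webkit-prefixed version.
function getAudioContextCtor(globalObj) {
  return globalObj.AudioContext || globalObj.webkitAudioContext || null;
}

// Browser-only wiring: the context must be created synchronously inside the
// click handler -- no awaits before the constructor call.
if (typeof document !== 'undefined') {
  document.getElementById('start').addEventListener('click', () => {
    const Ctor = getAudioContextCtor(window);
    if (Ctor) {
      const ctx = new Ctor(); // created directly in the gesture callback
      // ...wire up getUserMedia and analyser nodes from here...
    }
  });
}
```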
Comment 22 Eric Carlson 2018-02-20 10:56:44 PST
Fixed by the changes for 180680.