Summary:   Audio stream volume example doesn't work
Product:   WebKit
Component: WebRTC
Reporter:  Ben <ben.browitt>
Assignee:  Nobody <webkit-unassigned>
Status:    RESOLVED CONFIGURATION CHANGED
Severity:  Normal
Priority:  P2
Version:   WebKit Nightly Build
Hardware:  Unspecified
OS:        Unspecified
CC:        daniele.tagliavini, eric.carlson, philipp.weissensteiner, tomasloon, webkit, youennf
Description
Ben
2017-09-20 23:46:36 PDT
Same problem for me in these demos. Using WebRTC from an iPhone with Safari 11 to an Android phone with Chrome, audio is heard on the Android side but not on the iPhone, while it works on both sides when going Android to Android.

---
Chad Phillips:
Created attachment 326307 [details]
Test case using Hark

Same issue here. Attached is a test case I wrote using the popular Hark WebRTC speech-detection library. It works fine on Chrome, Opera, and desktop Safari. On iOS Safari, the volume is always 0.
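For reference, the kind of Hark-based volume test described here looks roughly like the following. This is a reconstruction for illustration, not the actual attachment 326307; it assumes hark.js is loaded globally and the page has a #volume element.

```javascript
// Sketch of a Hark-based volume test (a reconstruction, NOT attachment 326307).
navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  // hark wraps the stream in its own AudioContext/AnalyserNode internally.
  const speech = hark(stream, { interval: 100 });

  // volume_change fires with the current volume in dB (negative values;
  // effectively stuck at silence on iOS Safari at the time of this bug).
  speech.on('volume_change', (volumeDb) => {
    document.querySelector('#volume').textContent = volumeDb.toFixed(1);
  });

  speech.on('speaking', () => console.log('speaking'));
});
```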
---
Eric Carlson:
An AudioContext can only be started from within a touchend handler on iOS. Adding something like audioContext.startRendering() in a touchend event handler makes the attached test case work.

---
Ben:
The user already allowed mic permission, so why restrict the AudioContext to a touchend handler? It doesn't add any security.

---
Eric Carlson:
(In reply to Ben from comment #4)
> The user already allowed mic permission, why restrict AudioContext to a
> touchend handler? It doesn't add any security.

The user gave permission to capture from the microphone, not to use the speaker. This is not a new requirement; iOS has always required a user gesture to start an AudioContext or a media element.

---
In almost all cases, when you request microphone permission, you also want to use the speakers. A microphone permission prompt has a prominent UI and higher security implications, and must be triggered by user interaction. Requiring a developer to create the AudioContext from a touchend callback makes no sense in this case. The fact that WebRTC developers are confused is the proof.

---
youenn:
When a web page is capturing mic or camera, we should probably allow "autoplay" of web audio. When a web page is not capturing, we should stick with the current approach.

---
Chad Phillips:
@youenn, I agree with your previous comment. As an application developer attempting to leverage WebRTC natively on iOS, I think the approach you suggest would both ease development and create a better user experience, without compromising security.

@Eric Carlson, would you be able to attach an update to my previous test case that passes on iOS currently? I've noticed a real dearth of examples and documentation for these newly available features; I'm sure other devs would appreciate a few of us beating down the weeds on this new trail... ;)

@youenn, should we reopen this issue?

---
FWIW, I agree with youenn and chad; adding audioContext.startRendering seems cumbersome and annoying after already prompting the user for microphone access. Let's investigate this further.

---
Chad Phillips:
While this is in progress, can anybody share an actual working example of getting the microphone input currently?

@Eric Carlson suggested AudioContext.startRendering(), but it doesn't look to me like that's an actual method. OfflineAudioContext.startRendering() is, but I'm guessing that's not correct, either.

I've tried HTMLMediaElement.play() as well, to no effect.

---
You need a user gesture, like a click on a button that triggers a JS callback, to start web audio rendering.

---
Eric Carlson:
(In reply to Chad Phillips from comment #12)
> While this is in progress, can anybody share an actual working example of
> getting the microphone input currently?

Sorry about that, startRendering is indeed on OfflineAudioContext. You can use AudioContext.resume to start playback.

I have attached a modified version of your test case that adds a button which starts and stops the audio context. hark.js doesn't expose the audio context, so I modified it to add suspend and resume methods, a state property, and to emit an event when the context state changes.

Created attachment 327007 [details]
Modified test
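A minimal sketch of the start/stop button pattern the modified test uses, assuming a page with a <button id="toggle"> element (illustrative, not the contents of the attachment):

```javascript
// The AudioContext starts in the "suspended" state on iOS and only begins
// rendering once resume() is called from a user-gesture handler.
const audioContext = new (window.AudioContext || window.webkitAudioContext)();

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  // Wire the microphone into the graph; rendering stays suspended on iOS
  // until resume() is called from a gesture.
  const source = audioContext.createMediaStreamSource(stream);
  // ... connect `source` to an analyser or other processing nodes here ...
});

document.querySelector('#toggle').addEventListener('click', () => {
  // resume()/suspend() must be called synchronously inside the gesture
  // handler; both return promises.
  if (audioContext.state === 'suspended') {
    audioContext.resume();
  } else {
    audioContext.suspend();
  }
});
```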
Created attachment 327008 [details]
Modified Hark
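The hark.js change Eric describes might look roughly like this. It is a hedged sketch based on hark's published source, where the audioContext and the returned harker emitter live in the factory function's closure; the actual attachment 327008 may differ.

```javascript
// Inside hark's factory function, after the audio graph is set up:
// expose the closed-over AudioContext through the harker emitter.
harker.suspend = function () {
  return audioContext.suspend();
};
harker.resume = function () {
  return audioContext.resume();
};
Object.defineProperty(harker, 'state', {
  get: function () { return audioContext.state; }
});
// Emit an event whenever the context's state changes.
audioContext.onstatechange = function () {
  harker.emit('state_change', audioContext.state);
};
```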
---
Chad Phillips:
@Eric Carlson, yes! Thanks for the clues; it's clear now. Attaching a few passing test cases for reference, in case others get stuck.

The mistake I was making was trying to make calls against the object returned by audioContext.createMediaStreamSource(stream), which is a MediaStreamAudioSourceNode, not the original audioContext object. Calling resume() on the audioContext object itself does the trick. The audioContext object is also accessible through the MediaStreamAudioSourceNode's 'context' property.

Created attachment 327019 [details]
Passing test case with adjusted Hark library.
Using @Eric Carlson's patched Hark library...
Created attachment 327020 [details]
Passing test case using AudioContext only
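A sketch of what an "AudioContext only" volume check (no Hark) can look like, reconstructed for illustration rather than copied from attachment 327020. A click on an assumed #start button resumes the context, then an AnalyserNode is polled for an RMS level:

```javascript
const context = new (window.AudioContext || window.webkitAudioContext)();
const analyser = context.createAnalyser();
analyser.fftSize = 2048;
const samples = new Uint8Array(analyser.fftSize);

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const source = context.createMediaStreamSource(stream);
  source.connect(analyser);
  // Note: source.context === context, as pointed out above.
});

function logVolume() {
  analyser.getByteTimeDomainData(samples);
  let sum = 0;
  for (let i = 0; i < samples.length; i++) {
    const v = (samples[i] - 128) / 128; // recenter byte samples around zero
    sum += v * v;
  }
  console.log('RMS volume:', Math.sqrt(sum / samples.length));
  requestAnimationFrame(logVolume);
}

document.querySelector('#start').addEventListener('click', () => {
  // User gesture: resuming the context here satisfies iOS's requirement.
  context.resume().then(logVolume);
});
```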
---
Is this issue fixed by https://bugs.webkit.org/show_bug.cgi?id=180680 ?

Also related: https://bugs.webkit.org/show_bug.cgi?id=180522
---
I love you, Ben (original reporter). This bug, even though hard to find, was the reason none of my scripts worked. Just make sure you create the webkitAudioContext in a click action callback (no async allowed; see the sketch below) and it's good. I now just hope the fix is backwards compatible.
---
Fixed by the changes for 180680.
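For reference, a sketch of the "create the context in the click callback, no async first" pattern the commenter above describes; the #start id is illustrative:

```javascript
document.querySelector('#start').addEventListener('click', async () => {
  // Construct the (webkit)AudioContext synchronously, before any await,
  // so iOS treats it as gesture-initiated.
  const Ctx = window.AudioContext || window.webkitAudioContext;
  const context = new Ctx();

  // Async work such as getUserMedia can safely happen afterwards.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  context.createMediaStreamSource(stream); // connect into your graph here
});
```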