WebKit Bugzilla
RESOLVED CONFIGURATION CHANGED
177292
Audio stream volume example doesn't work
https://bugs.webkit.org/show_bug.cgi?id=177292
Ben
Reported
2017-09-20 23:46:36 PDT
The audio stream volume example in webrtc/samples doesn't show the volume on iOS Safari 11. Demo:
https://webrtc.github.io/samples/src/content/getusermedia/volume/
Source:
https://github.com/webrtc/samples/tree/gh-pages/src/content/getusermedia/volume
Attachments
Test case using Hark (2.28 KB, text/html), 2017-11-07 22:55 PST, Chad Phillips
Modified test (3.44 KB, text/html), 2017-11-15 12:06 PST, Eric Carlson
Modified Hark (9.06 KB, application/x-javascript), 2017-11-15 12:07 PST, Eric Carlson
Passing test case with adjusted Hark library (2.83 KB, text/html), 2017-11-15 13:44 PST, Chad Phillips
Passing test case using AudioContext only (1.51 KB, text/html), 2017-11-15 13:46 PST, Chad Phillips
daniele.tagliavini
Comment 1
2017-10-10 03:09:28 PDT
Same problem for me in this demo. Using WebRTC from an iPhone with Safari 11 to an Android phone running Chrome, audio is heard on Android but not on the iPhone, while it works on both sides when calling Android to Android.
Chad Phillips
Comment 2
2017-11-07 22:55:43 PST
Created attachment 326307 [details]
Test case using Hark

Same issue here. Attached is a test case I wrote using the popular Hark WebRTC speech detection library. It works fine in Chrome, Opera, and desktop Safari. On iOS Safari, the volume is always 0.
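For context, volume meters like the one in the sample and in Hark derive a level from time-domain audio data each frame. A minimal sketch of that metering math (the function name is my own, not from the Hark source); on iOS the level stays at 0 because the AudioContext never starts rendering:

```javascript
// Compute an RMS level in [0, 1] from time-domain samples in [-1, 1],
// the kind of buffer AnalyserNode.getFloatTimeDomainData() fills in.
function rmsLevel(samples) {
  let sumSquares = 0;
  for (const s of samples) {
    sumSquares += s * s;
  }
  return Math.sqrt(sumSquares / samples.length);
}

// Browser wiring (sketch, names assumed):
// analyser.getFloatTimeDomainData(buf);
// const level = rmsLevel(buf); // stays 0 if the context never runs
```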
Eric Carlson
Comment 3
2017-11-09 10:41:23 PST
An AudioContext can only be started from within a touchend handler on iOS. Adding something like audioContext.startRendering() in a touchend event handler makes the attached test case work.
Ben
Comment 4
2017-11-09 13:33:45 PST
The user already granted mic permission, so why restrict AudioContext to a touchend handler? It doesn't add any security.
Eric Carlson
Comment 5
2017-11-09 14:46:33 PST
(In reply to Ben from comment #4)
> The user already allowed mic permission, why restrict AudioContext to a
> touchend handler? It doesn't add any security.
The user gave permission to capture from the microphone, not to use the speaker. This is not a new requirement, iOS has always required a user gesture to start an AudioContext or a media element.
Ben
Comment 6
2017-11-10 05:04:01 PST
In almost all cases, when you request microphone permission, you also want to use the speakers. A microphone permission has a prominent UI, higher security implications, and must be triggered by user interaction. Requiring a developer to create the AudioContext from a touchend callback makes no sense in this case. The fact that WebRTC developers are confused is proof of that.
youenn fablet
Comment 7
2017-11-10 07:16:24 PST
When a web page is capturing the mic or camera, we should probably allow "autoplay" of web audio. When the page is not capturing, we should stick with the current approach.
Chad Phillips
Comment 8
2017-11-10 14:29:57 PST
@youenn, I agree with your previous comment. As an application developer attempting to leverage WebRTC natively on iOS, I think the approach you suggest would both ease development and create a better user experience, without compromising security. @Eric Carlson, would you be able to attach an update to my previous test case that passes on iOS currently? I've noticed a real dearth of examples and documentation for these newly available features, I'm sure other devs would appreciate a few of us beating down the weeds on this new trail... ;)
Ben
Comment 9
2017-11-14 23:56:00 PST
@youenn, should we reopen this issue?
philipp.weissensteiner
Comment 10
2017-11-15 05:35:44 PST
FWIW I agree with youenn and chad, adding audioContext.startRendering seems cumbersome and annoying after already prompting the user for microphone access.
youenn fablet
Comment 11
2017-11-15 07:25:34 PST
Let’s investigate this further
Chad Phillips
Comment 12
2017-11-15 10:42:36 PST
While this is in progress, can anybody share an actual working example of getting the microphone input currently? @Eric Carlson suggested AudioContext.startRendering(), but it doesn't look to me like that's an actual method. OfflineAudioContext.startRendering() is, but I'm guessing that's not correct, either. I've tried HTMLMediaElement.play() as well, to no effect.
youenn fablet
Comment 13
2017-11-15 11:42:20 PST
You need a user gesture like a click on a button that will trigger a JS callback to start web audio rendering.
Eric Carlson
Comment 14
2017-11-15 12:06:07 PST
(In reply to Chad Phillips from comment #12)
> While this is in progress, can anybody share an actual working example of
> getting the microphone input currently?
>
> @Eric Carlson suggested AudioContext.startRendering(), but it doesn't look
> to me like that's an actual method. OfflineAudioContext.startRendering() is,
> but I'm guessing that's not correct, either.
>
> I've tried HTMLMediaElement.play() as well to no effect.
Sorry about that, startRendering is indeed on OfflineAudioContext. You can use AudioContext.resume() to start playback. I have attached a modified version of your test case that adds a button which starts and stops the audio context. hark.js doesn't expose the audio context, so I modified it to add suspend and resume methods, a state property, and to emit an event when the context state changes.
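The pattern described here can be sketched as a small helper. This is a sketch, assuming only the standard AudioContext state machine ('suspended' / 'running'); the button wiring at the bottom is shown as comments because it needs a browser:

```javascript
// Resume an AudioContext if it is suspended; resolves with the resulting
// state. On iOS Safari, resume() is only honored when it is called from
// inside a user-gesture handler such as touchend or click.
function resumeIfSuspended(ctx) {
  if (ctx.state === 'suspended') {
    return ctx.resume().then(() => ctx.state);
  }
  return Promise.resolve(ctx.state);
}

// Browser wiring (sketch, element names assumed):
// const audioContext = new (window.AudioContext || window.webkitAudioContext)();
// startButton.addEventListener('touchend', () => resumeIfSuspended(audioContext));
```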
Eric Carlson
Comment 15
2017-11-15 12:06:55 PST
Created attachment 327007 [details]
Modified test
Eric Carlson
Comment 16
2017-11-15 12:07:33 PST
Created attachment 327008 [details]
Modified Hark
Chad Phillips
Comment 17
2017-11-15 13:42:54 PST
@Eric Carlson, yes! Thanks for the clues; it's clear now. Attaching a few passing test cases for reference in case others get stuck. The mistake I was making was trying to make calls against the object returned by audioContext.createMediaStreamSource(stream), which is a MediaStreamAudioSourceNode, not the original audioContext object. Calling resume() on the audioContext object itself does the trick. The audioContext object is also accessible via the MediaStreamAudioSourceNode's 'context' property.
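The gotcha described here can be made concrete. A tiny sketch (the helper name is my own): resume() lives on the AudioContext, which every node also exposes through its 'context' property, not on the source node itself:

```javascript
// createMediaStreamSource() returns a MediaStreamAudioSourceNode, which
// has no resume() of its own; the owning AudioContext is reachable via
// the node's 'context' property, and that is where resume() lives.
function resumeViaNode(sourceNode) {
  return sourceNode.context.resume();
}

// Browser usage (sketch):
// const source = audioContext.createMediaStreamSource(stream);
// source.context === audioContext;  // true
// resumeViaNode(source);            // same as audioContext.resume()
// // source.resume();               // TypeError: not a function
```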
Chad Phillips
Comment 18
2017-11-15 13:44:53 PST
Created attachment 327019 [details]
Passing test case with adjusted Hark library

Using @Eric Carlson's patched Hark library...
Chad Phillips
Comment 19
2017-11-15 13:46:40 PST
Created attachment 327020 [details]
Passing test case using AudioContext only
Ben
Comment 20
2017-12-15 00:10:03 PST
Is this issue fixed by https://bugs.webkit.org/show_bug.cgi?id=180680 ?
Also related: https://bugs.webkit.org/show_bug.cgi?id=180522
Tomas Roggero
Comment 21
2018-02-20 10:31:15 PST
I love you, Ben (original reporter). This bug, even though hard to find, was the reason all my scripts didn't work. Just make sure you create the webkitAudioContext in a click action callback (no async allowed) and it's good. I now just hope the fix is backwards compatible.
Eric Carlson
Comment 22
2018-02-20 10:56:44 PST
Fixed by the changes for 180680.