Bug 110760: WebSpeech: need global speech controller
Status: NEW
https://bugs.webkit.org/show_bug.cgi?id=110760
Dominic Mazzoni
Reported 2013-02-25 08:12:26 PST
The current implementation of speech synthesis has a queue inside the SpeechSynthesis object that's owned by one DOMWindow. This isn't likely to work very well if multiple windows try to speak at the same time. I think that the queue needs to be pushed into the WebKit layer so that a multi-process browser can implement a single speech queue.

I filed this bug against the speech API to clarify the exact semantics of what should happen if multiple windows try to speak, but I think that no matter how this is resolved, we'll want at least some global state.

https://www.w3.org/Bugs/Public/show_bug.cgi?id=21110
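To make the per-window queue concrete, here is a minimal TypeScript sketch (page script, not WebKit code; the utterance text is illustrative) of how a page drives the Web Speech API today. Each window's speechSynthesis object owns the queue that speak() appends to, which is exactly the state this bug proposes to centralize.

// Each window/frame talks to its own window.speechSynthesis, so the queue
// that speak() appends to is currently per-DOMWindow.
const utterance = new SpeechSynthesisUtterance("Hello from this window");

// speak() enqueues the utterance on this window's SpeechSynthesis queue.
window.speechSynthesis.speak(utterance);

// A second frame doing the same thing enqueues onto a different queue, which
// is why concurrent speech from multiple windows is effectively undefined today.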
chris fleizach
Comment 1 2013-02-25 08:52:44 PST
(In reply to comment #0)
> The current implementation of speech synthesis has a queue inside the SpeechSynthesis object that's owned by one DOMWindow. This isn't likely to work very well if multiple windows try to speak at the same time.

If multiple windows try talking at the same time, it's unlikely the results will be good. A question that comes up is whether you want one window to know about speech synthesis usage in another window. At the same time, web synthesis will have no idea what's happening outside the browser, where there could also be something speaking. Since you won't know the state outside the browser, it didn't seem that useful to know the state outside the window either.

> I think that the queue needs to be pushed into the WebKit layer so that a multi-process browser can implement a single speech queue.

Why would it need to be in the WebKit layer?

> I filed this bug against the speech API to clarify the exact semantics of what should happen if multiple windows try to speak, but I think that no matter how this is resolved, we'll want at least some global state.
>
> https://www.w3.org/Bugs/Public/show_bug.cgi?id=21110
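For reference, the state a page can already observe is scoped the same way; a rough TypeScript sketch of the existing page-visible flags (assuming nothing about other windows):

const synth = window.speechSynthesis;

console.log(synth.speaking); // true while an utterance from this window is being spoken
console.log(synth.pending);  // true if utterances from this window are still queued
console.log(synth.paused);   // true if this window's queue is paused

// With per-window queues, none of these flags say whether another window
// (or an app outside the browser entirely) is currently speaking.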
Dominic Mazzoni
Comment 2 2013-02-25 09:11:38 PST
(In reply to comment #1)
> If multiple windows try talking at the same time, it's unlikely the results will be good. A question that comes up is whether you want one window to know about speech synthesis usage in another window.

It's not necessarily a bad experience. A page with multiple frames might want to let more than one frame talk, for example. A page that speaks the current time once an hour might coexist with another page that speaks more interactively; it seems fine for the current time to just enqueue its utterance.

> At the same time, web synthesis will have no idea what's happening outside the browser, where there could also be something speaking. Since you won't know the state outside the browser, it didn't seem that useful to know the state outside the window either.

It's true: if another app outside the browser is tying up speech, you might just get an error.

> > I think that the queue needs to be pushed into the WebKit layer so that a multi-process browser can implement a single speech queue.
>
> Why would it need to be in the WebKit layer?

I mean, to expose some APIs for the embedder to implement the queuing if it wants, rather than having it be part of WebCore. Let's see what the consensus is on the spec and go from there.
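A rough TypeScript sketch of the coexistence example from comment #2 (the page behavior and utterance text are assumptions for illustration): a page that announces the time once an hour just enqueues its utterance and handles the error case where speech is unavailable or tied up elsewhere.

function announceTime(): void {
  const now = new Date();
  const utterance = new SpeechSynthesisUtterance(`It is ${now.getHours()} o'clock`);

  // If the synthesizer is busy or fails, the page just gets an error event.
  utterance.onerror = (event: SpeechSynthesisErrorEvent) => {
    console.warn("Speech failed:", event.error);
  };

  // Enqueue behind whatever this window is already speaking.
  window.speechSynthesis.speak(utterance);
}

// Speak the time once an hour; a more interactive page can coexist by
// enqueueing its own utterances on its own window's queue.
setInterval(announceTime, 60 * 60 * 1000);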
Radar WebKit Bug Importer
Comment 3 2014-02-07 11:23:34 PST