Speech JavaScript API: SpeechRecognition, Controller and Client
Created attachment 131830 [details]
Patch
I've uploaded SpeechRecognition, the Controller and the Client in the same patch because I figured that makes it easier to see how they relate to each other. I'm happy to break them up into three separate patches if anyone prefers. Any pointers to where I can learn more about ActiveDOMObject would also be appreciated. At the moment I only have a vague idea that SpeechRecognition needs to be an ActiveDOMObject so that it won't get garbage-collected while there are pending tasks, but I figure there's more to know? Please take a look. Thanks.
Comment on attachment 131830 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=131830&action=review

I'm marking this patch R+, but you seem to have a bunch of unnecessary boilerplate in SpeechRecognition. Maybe there's a reason for it that I don't understand? Please consider the comments below before landing. Thanks!

> Source/WebCore/Modules/speech/SpeechRecognition.cpp:68
> +void SpeechRecognition::audioStartCallback()
> +{
> +    fireAudioStart();
> +}

What's the point of having these thunks? Why not just have the SpeechRecognitionClient call dispatchAudioStart directly?

> Source/WebCore/Modules/speech/SpeechRecognition.cpp:135
> +bool SpeechRecognition::canSuspend() const
> +{
> +    // FIXME: Implement!
> +    return false;
> +}
> +
> +void SpeechRecognition::stop()
> +{
> +    // FIXME: Implement!
> +}
> +
> +bool SpeechRecognition::hasPendingActivity() const
> +{
> +    // FIXME: Implement!
> +    return false;
> +}

Do you need to override these? Why not just inherit?

> Source/WebCore/Modules/speech/SpeechRecognition.cpp:146
> +ScriptExecutionContext* SpeechRecognition::scriptExecutionContext() const
> +{
> +    return ActiveDOMObject::scriptExecutionContext();
> +}

Do you need to override this? Why not just inherit?

> Source/WebCore/Modules/speech/SpeechRecognition.cpp:167
> +    m_controller->unregisterSpeechRecognition(this);

I was expecting a corresponding registerSpeechRecognition call. Is that not needed?

Please consider unregistering yourself in SpeechRecognition::stop() instead. SpeechRecognition::stop() is called at a deterministic time, but SpeechRecognition::~SpeechRecognition is called non-deterministically by the garbage collector. Generally, we prefer web-facing APIs to behave as deterministically as possible so we don't get content that expects the current quirks of a garbage collector.

> Source/WebCore/Modules/speech/SpeechRecognition.cpp:170
> +void SpeechRecognition::fireAudioStart()

Normally we'd call these functions "dispatchAudioStart".
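[Editorial note: the thunk-versus-direct-dispatch question above can be illustrated with a minimal, self-contained sketch. The `Stub` classes below are invented for illustration and are not the actual WebKit classes; they only mimic the boundary being discussed, where the port-specific client calls a narrow callback surface and event dispatch stays inside the recognition object.]

```cpp
#include <string>
#include <vector>

// Stand-in for the WebCore-side object: it owns event dispatch, so the
// client layer never needs to know how events are created or fired.
class SpeechRecognitionStub {
public:
    // Thunks called by the client layer; all WebCore-side details
    // (event creation, internal state) stay behind this boundary.
    void audioStartCallback() { dispatchEvent("audiostart"); }
    void resultCallback(const std::string& transcript)
    {
        m_lastTranscript = transcript;
        dispatchEvent("result");
    }

    const std::vector<std::string>& firedEvents() const { return m_firedEvents; }
    const std::string& lastTranscript() const { return m_lastTranscript; }

private:
    void dispatchEvent(const std::string& type) { m_firedEvents.push_back(type); }

    std::vector<std::string> m_firedEvents;
    std::string m_lastTranscript;
};

// Stand-in for the port-specific SpeechRecognitionClient (e.g. a Chromium
// proxy): it only sees the callback surface, not event dispatch.
class SpeechRecognitionClientStub {
public:
    explicit SpeechRecognitionClientStub(SpeechRecognitionStub* recognition)
        : m_recognition(recognition) { }

    // Simulate the platform signaling that audio capture began and that a
    // recognition result arrived.
    void simulatePlatformActivity()
    {
        m_recognition->audioStartCallback();
        m_recognition->resultCallback("hello");
    }

private:
    SpeechRecognitionStub* m_recognition;
};
```

The trade-off debated in the review is visible here: collapsing `audioStartCallback()` into direct dispatch saves a thunk, but keeps the client coupled to WebCore's event machinery.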
(In reply to comment #3)
> (From update of attachment 131830 [details])
> View in context: https://bugs.webkit.org/attachment.cgi?id=131830&action=review
>
> I'm marking this patch R+, but you seem to have a bunch of unnecessary boilerplate in SpeechRecognition. Maybe there's a reason for it that I don't understand? Please consider the comments below before landing. Thanks!

Thanks very much for the review! I'm re-uploading so you can take a look and let me know if my responses below make sense; there's no need to rush this.

> > Source/WebCore/Modules/speech/SpeechRecognition.cpp:68
> > +void SpeechRecognition::audioStartCallback()
> > +{
> > +    fireAudioStart();
> > +}
>
> What's the point of having these thunks? Why not just have the SpeechRecognitionClient call dispatchAudioStart directly?

I don't want the class that implements SpeechRecognitionClient (for Chromium, it will be a proxy object forwarding calls to/from Chromium) to be concerned with WebCore details such as event dispatch or the internal state of the SpeechRecognition object. I think it's much cleaner for the client, when it has a result, to call back to speechRecognition->resultCallback() and nothing more. I can see that it looks odd to have so many nearly empty functions, though. I'll move the event-dispatching code into these callbacks, since they don't do much else at the moment.

> > Source/WebCore/Modules/speech/SpeechRecognition.cpp:135
> > +bool SpeechRecognition::canSuspend() const
> > +{
> > +    // FIXME: Implement!
> > +    return false;
> > +}
> > +
> > +void SpeechRecognition::stop()
> > +{
> > +    // FIXME: Implement!
> > +}
> > +
> > +bool SpeechRecognition::hasPendingActivity() const
> > +{
> > +    // FIXME: Implement!
> > +    return false;
> > +}
>
> Do you need to override these? Why not just inherit?

I thought the point of ActiveDOMObject was that I should override hasPendingActivity() to return true while recognition is started and we are waiting for callbacks, to keep the object from being garbage-collected. I see now that maybe what I should do instead is call ActiveDOMObject::setPendingActivity() / unsetPendingActivity()? Removing these functions.

> > Source/WebCore/Modules/speech/SpeechRecognition.cpp:146
> > +ScriptExecutionContext* SpeechRecognition::scriptExecutionContext() const
> > +{
> > +    return ActiveDOMObject::scriptExecutionContext();
> > +}
>
> Do you need to override this? Why not just inherit?

Yes, this is overriding EventTarget::scriptExecutionContext(), which is pure virtual.

> > Source/WebCore/Modules/speech/SpeechRecognition.cpp:167
> > +    m_controller->unregisterSpeechRecognition(this);
>
> I was expecting a corresponding registerSpeechRecognition call. Is that not needed?

The object is registered with the client when start() is called.

> Please consider unregistering yourself in SpeechRecognition::stop() instead. SpeechRecognition::stop() is called at a deterministic time, but SpeechRecognition::~SpeechRecognition is called non-deterministically by the garbage collector. Generally, we prefer web-facing APIs to behave as deterministically as possible so we don't get content that expects the current quirks of a garbage collector.

Yes, unregistering at a deterministic time sounds desirable. stop() is probably not the right time to do it, though, since the object will still receive events after that (onend). Since the object will need to keep track of whether it still expects events anyway, for the sake of ActiveDOMObject, it should just unregister itself when it no longer expects events.

> > Source/WebCore/Modules/speech/SpeechRecognition.cpp:170
> > +void SpeechRecognition::fireAudioStart()
>
> Normally we'd call these functions "dispatchAudioStart".

I've removed these, as per my comment above.
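[Editorial note: the register/unregister lifecycle agreed on above can be sketched in a self-contained way. These `Stub` classes are invented for illustration, not the WebKit originals; the point they demonstrate is unregistering at the deterministic moment when no more events are expected, after dispatching "end", rather than in a garbage-collector-driven destructor.]

```cpp
#include <set>
#include <string>
#include <vector>

class RecognitionControllerStub;

// Toy recognition object: registers with a controller on start() and
// unregisters when it stops expecting events.
class RecognitionStub {
public:
    explicit RecognitionStub(RecognitionControllerStub* controller)
        : m_controller(controller) { }

    void start();
    void endCallback(); // Platform reports recognition has finished.

    bool hasPendingActivity() const { return m_pendingActivity; }
    const std::vector<std::string>& firedEvents() const { return m_firedEvents; }

private:
    void dispatchEvent(const std::string& type) { m_firedEvents.push_back(type); }

    RecognitionControllerStub* m_controller;
    bool m_pendingActivity = false;
    std::vector<std::string> m_firedEvents;
};

// Toy controller: tracks which recognition objects are currently active.
class RecognitionControllerStub {
public:
    void registerRecognition(RecognitionStub* r) { m_active.insert(r); }
    void unregisterRecognition(RecognitionStub* r) { m_active.erase(r); }
    bool isRegistered(RecognitionStub* r) const { return m_active.count(r) > 0; }

private:
    std::set<RecognitionStub*> m_active;
};

void RecognitionStub::start()
{
    m_controller->registerRecognition(this);
    m_pendingActivity = true; // Analogous to ActiveDOMObject::setPendingActivity().
    dispatchEvent("start");
}

void RecognitionStub::endCallback()
{
    dispatchEvent("end");
    // No further events are expected: drop pending activity and unregister
    // here, deterministically, rather than in the destructor.
    m_pendingActivity = false;
    m_controller->unregisterRecognition(this);
}
```

Note that a stop() call would not be the unregistration point in this sketch either: events (here, "end") still arrive afterwards, matching the reasoning in the reply above.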
Created attachment 132047 [details]
Patch
> > Do you need to override these? Why not just inherit?
>
> I thought the point of ActiveDOMObject was that I should override hasPendingActivity() to return true when the recognition was started and we were waiting for callbacks to prevent garbage collecting the object.
>
> I see now that maybe what I should do is call into ActiveDOMObject::setPendingActivity() / unsetPendingActivity() instead? Removing these functions.

Yeah, or you can override these functions later when you're actually implementing that functionality. As-is, these were just stubs.

> > > Source/WebCore/Modules/speech/SpeechRecognition.cpp:146
> > > +ScriptExecutionContext* SpeechRecognition::scriptExecutionContext() const
> > > +{
> > > +    return ActiveDOMObject::scriptExecutionContext();
> > > +}
> >
> > Do you need to override this? Why not just inherit?
>
> Yes, this is overriding EventTarget::scriptExecutionContext(), which is pure virtual.

Ah, makes sense.
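[Editorial note: why the forwarding override cannot simply be inherited is a general C++ multiple-inheritance point. The `Like` classes below are invented stand-ins, not the WebKit classes: one base declares a pure virtual accessor (like EventTarget::scriptExecutionContext()), an unrelated base happens to provide a same-named non-virtual helper (like ActiveDOMObject's), and the derived class must still override and forward explicitly or it stays abstract.]

```cpp
#include <string>

// Stand-in for ScriptExecutionContext.
struct ContextStub {
    std::string name;
};

// Stand-in for EventTarget: declares a pure virtual accessor.
class EventTargetLike {
public:
    virtual ~EventTargetLike() = default;
    virtual ContextStub* context() const = 0;
};

// Stand-in for ActiveDOMObject: an unrelated base with a same-named,
// non-virtual helper. It does NOT satisfy EventTargetLike's pure virtual.
class ActiveLike {
public:
    explicit ActiveLike(ContextStub* context) : m_context(context) { }
    ContextStub* context() const { return m_context; }

private:
    ContextStub* m_context;
};

class RecognitionLike : public ActiveLike, public EventTargetLike {
public:
    explicit RecognitionLike(ContextStub* context) : ActiveLike(context) { }

    // Without this forwarding override, RecognitionLike would remain
    // abstract: ActiveLike::context() lives in a separate hierarchy and
    // cannot implement EventTargetLike::context() by itself.
    ContextStub* context() const override { return ActiveLike::context(); }
};
```

This is the same shape as the patch's SpeechRecognition::scriptExecutionContext(), which forwards to ActiveDOMObject::scriptExecutionContext() to implement EventTarget's pure virtual.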
Comment on attachment 132047 [details]
Patch

Thanks.
Comment on attachment 132047 [details]
Patch

Clearing flags on attachment: 132047

Committed r110950: <http://trac.webkit.org/changeset/110950>
All reviewed patches have been landed. Closing bug.