The next part of this is basic speaking behavior and event dispatch for the start/end/error events.
Created attachment 183267 [details] patch
Comment on attachment 183267 [details] patch

Unofficial review. View in context: https://bugs.webkit.org/attachment.cgi?id=183267&action=review

> Source/WebCore/Modules/speech/SpeechSynthesis.cpp:48
> + m_isSpeaking = true;

I don't see where you set m_isSpeaking to false when the last utterance finishes. Could you cover that in tests?

> Source/WebCore/Modules/speech/SpeechSynthesis.cpp:59
> + // Otherwise, append to the queue to wait to be processed.

I think you meant to have a return before this next line? It looks like it will speak immediately and also append to the queue. Can you test for that too?

> Source/WebCore/Modules/speech/SpeechSynthesis.h:63
> + void platformFinishedSpeaking(SpeechSynthesisUtterance*, bool successful);

Mac returns a bool true/false in its "finished speaking" callback, but other platforms don't necessarily work that way. To better match the spec, how about a separate error method:

    void platformStartedSpeaking(SpeechSynthesisUtterance*);
    void platformFinishedSpeaking(SpeechSynthesisUtterance*);
    void platformError(SpeechSynthesisUtterance*);

Note that according to the spec, both the "end" and "error" events mean that that particular utterance is finished.

The spec doesn't have an error message, maybe we should add one? The speech recognition part has an enum.

> LayoutTests/platform/mac/fast/speechsynthesis/speech-synthesis-speak.html:20
> + var u = new SpeechSynthesisUtterance(" ");

This is fine for now, but I think that speech will have to be mocked out in order for these tests to be comprehensive but not annoy WebKit developers. Even just speaking silence could potentially mess up someone running a screen reader or synthesizing the daily news while they're running layout tests in the background.
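For reference, a rough sketch (not the reviewed patch) of how the three-callback split suggested above could map onto event dispatch, assuming fireEvent() and handleSpeakingCompleted() helpers along the lines of what the later patch adds, and an eventNames().startEvent entry. Per the spec, both "end" and "error" mean the utterance is finished, so both can funnel into the same completion path:

    void SpeechSynthesis::platformStartedSpeaking(SpeechSynthesisUtterance* utterance)
    {
        // Fire "start" as soon as the platform synthesizer begins the utterance.
        fireEvent(eventNames().startEvent, utterance, 0, String());
    }

    void SpeechSynthesis::platformFinishedSpeaking(SpeechSynthesisUtterance* utterance)
    {
        // Fires "end" and kicks off the next queued job.
        handleSpeakingCompleted(utterance, false);
    }

    void SpeechSynthesis::platformError(SpeechSynthesisUtterance* utterance)
    {
        // Fires "error"; the utterance still counts as finished.
        handleSpeakingCompleted(utterance, true);
    }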
(In reply to comment #2)
> (From update of attachment 183267 [details])
> Unofficial review.
>
> View in context: https://bugs.webkit.org/attachment.cgi?id=183267&action=review
>
> > Source/WebCore/Modules/speech/SpeechSynthesis.cpp:48
> > + m_isSpeaking = true;
>
> I don't see where you set m_isSpeaking to false when the last utterance finishes. Could you cover that in tests?

Good catch. I had this before, but it must have gotten lost.

> > Source/WebCore/Modules/speech/SpeechSynthesis.cpp:59
> > + // Otherwise, append to the queue to wait to be processed.
>
> I think you meant to have a return before this next line? It looks like it will speak immediately and also append to the queue. Can you test for that too?

I think this is correct. The queue maintains all speaking jobs, whether they're in flight or not. So we want to start speaking right away and also add it to our queue. However, we should probably add it to the queue FIRST in case there's some immediate feedback loop where it thinks the queue is empty.

> > Source/WebCore/Modules/speech/SpeechSynthesis.h:63
> > + void platformFinishedSpeaking(SpeechSynthesisUtterance*, bool successful);
>
> Mac returns a bool true/false in its "finished speaking" callback, but other platforms don't necessarily work that way. To better match the spec, how about a separate error method:
>
> void platformStartedSpeaking(SpeechSynthesisUtterance*);
> void platformFinishedSpeaking(SpeechSynthesisUtterance*);
> void platformError(SpeechSynthesisUtterance*);
>
> Note that according to the spec, both the "end" and "error" events mean that that particular utterance is finished.

Good idea.

> The spec doesn't have an error message, maybe we should add one? The speech recognition part has an enum.

I haven't seen speech synths give error messages. Either it's stopped prematurely or it finishes. I don't know if there's any other cases that are meaningful.

> > LayoutTests/platform/mac/fast/speechsynthesis/speech-synthesis-speak.html:20
> > + var u = new SpeechSynthesisUtterance(" ");
>
> This is fine for now, but I think that speech will have to be mocked out in order for these tests to be comprehensive but not annoy WebKit developers. Even just speaking silence could potentially mess up someone running a screen reader or synthesizing the daily news while they're running layout tests in the background.

Yeah, I agree with this. We probably want something in DRT to indicate that we want "fake" speech events.
(In reply to comment #3)
> > I think you meant to have a return before this next line? It looks like it will speak immediately and also append to the queue. Can you test for that too?
>
> I think this is correct. The queue maintains all speaking jobs, whether they're in flight or not. So we want to start speaking right away and also add it to our queue. However, we should probably add it to the queue FIRST in case there's some immediate feedback loop where it thinks the queue is empty.

Oh, in that case maybe you don't need m_isSpeaking - just make a helper function that returns true if the queue is nonempty.

> I haven't seen speech synths give error messages. Either it's stopped prematurely or it finishes. I don't know if there's any other cases that are meaningful.

There might be errors like "invalid voice" or "speech synthesizer busy" on some platforms, and those errors might be generated asynchronously. But it's not a big deal for now.
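To make the ordering from comment #3 concrete, here is a minimal sketch (names assumed, not the landed code) of appending to the queue first and only starting the platform synthesizer when the new utterance is the only entry, together with the helper suggested above in place of m_isSpeaking:

    bool SpeechSynthesis::isSpeaking() const
    {
        // The queue holds every job, in flight or pending, so non-empty means we're speaking.
        return !m_utteranceQueue.isEmpty();
    }

    void SpeechSynthesis::speak(SpeechSynthesisUtterance* utterance)
    {
        // Append before starting, so a re-entrant callback never sees an empty queue.
        m_utteranceQueue.append(utterance);
        if (m_utteranceQueue.size() == 1)
            startSpeakingImmediately(utterance);
    }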
Created attachment 183280 [details] patch
Created attachment 183281 [details] patch
(In reply to comment #4)
> (In reply to comment #3)
> > > I think you meant to have a return before this next line? It looks like it will speak immediately and also append to the queue. Can you test for that too?
> >
> > I think this is correct. The queue maintains all speaking jobs, whether they're in flight or not. So we want to start speaking right away and also add it to our queue. However, we should probably add it to the queue FIRST in case there's some immediate feedback loop where it thinks the queue is empty.
>
> Oh, in that case maybe you don't need m_isSpeaking - just make a helper function that returns true if the queue is nonempty.
>
> > I haven't seen speech synths give error messages. Either it's stopped prematurely or it finishes. I don't know if there's any other cases that are meaningful.
>
> There might be errors like "invalid voice" or "speech synthesizer busy" on some platforms, and those errors might be generated asynchronously. But it's not a big deal for now.

Good point. We should probably raise a spec issue.
Comment on attachment 183281 [details] patch

Unofficial review, looks good.
Comment on attachment 183281 [details] patch

View in context: https://bugs.webkit.org/attachment.cgi?id=183281&action=review

> Source/WebCore/Modules/speech/mac/SpeechSynthesisMac.mm:35
> +@interface WebSpeechSynthesisWrapper : NSObject<NSSpeechSynthesizerDelegate>

Based on the discussion in Bug 107382, we thought we should fix bug 107414 before elaborating these misplaced / insufficiently abstracted files. Would you be willing to fix bug 107414 first?
(In reply to comment #9)
> (From update of attachment 183281 [details])
> View in context: https://bugs.webkit.org/attachment.cgi?id=183281&action=review
>
> > Source/WebCore/Modules/speech/mac/SpeechSynthesisMac.mm:35
> > +@interface WebSpeechSynthesisWrapper : NSObject<NSSpeechSynthesizerDelegate>
>
> Based on the discussion in Bug 107382, we thought we should fix bug 107414 before elaborating these misplaced / insufficiently abstracted files. Would you be willing to fix bug 107414 first?

Can you point me to an example of this separation? Right now, I'm following what we do with AXObjectCache: the webby/core aspects are in AXObjectCache.cpp, and the platform methods that are needed are declared in AXObjectCache.h but are relied upon to be implemented in specific platform files, like AXObjectCacheMac.mm.

A similar thing is happening with SpeechSynthesis now, except the platform file is called SpeechSynthesisMac. I'm still working through which piece of the puzzle goes where, but I believe this patch provides the right abstraction/separation that we desire.

Thanks
Comment on attachment 183281 [details] patch

Attachment 183281 [details] did not pass cr-linux-debug-ews (chromium-xvfb):
Output: http://queues.webkit.org/results/16151885
Created attachment 187465 [details] patch
Attachment 187465 [details] did not pass style-queue:

Failed to run "['Tools/Scripts/check-webkit-style', '--diff-files', u'LayoutTests/ChangeLog', u'LayoutTests/platform/mac/fast/speechsynthesis/speech-synthesis-speak-expected.txt', u'LayoutTests/platform/mac/fast/speechsynthesis/speech-synthesis-speak.html', u'Source/WebCore/ChangeLog', u'Source/WebCore/Modules/speech/SpeechSynthesis.cpp', u'Source/WebCore/Modules/speech/SpeechSynthesis.h', u'Source/WebCore/Modules/speech/SpeechSynthesisEvent.cpp', u'Source/WebCore/Modules/speech/SpeechSynthesisEvent.h', u'Source/WebCore/Modules/speech/SpeechSynthesisUtterance.h', u'Source/WebCore/platform/PlatformSpeechSynthesisUtterance.cpp', u'Source/WebCore/platform/PlatformSpeechSynthesisUtterance.h', u'Source/WebCore/platform/PlatformSpeechSynthesizer.cpp', u'Source/WebCore/platform/PlatformSpeechSynthesizer.h', u'Source/WebCore/platform/mac/PlatformSpeechSynthesizerMac.mm']" exit_code: 1

Source/WebCore/Modules/speech/SpeechSynthesis.cpp:129: Weird number of spaces at line-start. Are you using a 4-space indent? [whitespace/indent] [3]
Total errors found: 1 in 14 files

If any of these errors are false positives, please file a bug against check-webkit-style.
Comment on attachment 187465 [details] patch

View in context: https://bugs.webkit.org/attachment.cgi?id=187465&action=review

> Source/WebCore/platform/PlatformSpeechSynthesizer.h:61
> + PlatformSpeechSynthesizerClient *client() const { return m_speechSynthesizerClient; }

* on the wrong side.

> Source/WebCore/platform/mac/PlatformSpeechSynthesizerMac.mm:162
> + m_platformSpeechWrapper = RetainPtr<id>(AdoptNS, [[WebSpeechSynthesisWrapper alloc] initWithSpeechSynthesizer:this]);

You should use the adoptNS() function here.

> Source/WebCore/platform/PlatformSpeechSynthesisUtterance.h:57
> + void setVolume(float volume) { m_volume = std::max(std::min(0.f, volume), 1.f); }

Please use 0.0f and 1.0f (or, if it works, just 0 and 1).

> Source/WebCore/Modules/speech/SpeechSynthesis.cpp:127
> + if (m_utteranceQueue.size())
> +     startSpeakingImmediately(m_utteranceQueue.last().get());

Should this be taking the first one instead?

> Source/WebCore/Modules/speech/SpeechSynthesis.h:66
> + virtual void didStartSpeaking(const PlatformSpeechSynthesisUtterance*);
> + virtual void didFinishSpeaking(const PlatformSpeechSynthesisUtterance*);
> + virtual void speakingErrorOccurred(const PlatformSpeechSynthesisUtterance*);

These should have OVERRIDE on them.
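As a rough illustration of two of the points above (assumed context, not the actual patch): OVERRIDE on the PlatformSpeechSynthesizerClient overrides, and the volume setter using the suggested 0.0f/1.0f literals, with the bounds ordered so the value actually clamps into the [0, 1] range:

    // SpeechSynthesis.h: client callbacks overridden from PlatformSpeechSynthesizerClient.
    virtual void didStartSpeaking(const PlatformSpeechSynthesisUtterance*) OVERRIDE;
    virtual void didFinishSpeaking(const PlatformSpeechSynthesisUtterance*) OVERRIDE;
    virtual void speakingErrorOccurred(const PlatformSpeechSynthesisUtterance*) OVERRIDE;

    // PlatformSpeechSynthesisUtterance.h: clamp the volume into [0, 1].
    void setVolume(float volume) { m_volume = std::max(0.0f, std::min(volume, 1.0f)); }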
Created attachment 187495 [details] patch

Thanks for the review. Good catch on the first vs. last for the next speech job.
Comment on attachment 187495 [details] patch

View in context: https://bugs.webkit.org/attachment.cgi?id=187495&action=review

> Source/WebCore/platform/mac/PlatformSpeechSynthesizerMac.mm:41
> + NSSpeechSynthesizer *m_synthesizer;

If you use a RetainPtr<> here, you don't need the dealloc.

> Source/WebCore/platform/mac/PlatformSpeechSynthesizerMac.mm:88
> + WTF::Vector<WTF::RefPtr<WebCore::PlatformSpeechSynthesisVoice> > voiceList = m_synthesizerObject->voiceList();

The WTF::'s here are not necessary.

> Source/WebCore/Modules/speech/SpeechSynthesis.cpp:90
> + // If the queue is empty, speak this immediately and add it to the queue.

s/is/was/

> Source/WebCore/Modules/speech/SpeechSynthesis.cpp:128
> +void SpeechSynthesis::handleSpeakingCompleted(SpeechSynthesisUtterance* utterance, bool errorOccurred)
> +{
> +    ASSERT(utterance);
> +
> +    m_isSpeaking = false;
> +
> +    fireEvent(errorOccurred ? eventNames().errorEvent : eventNames().endEvent, utterance, 0, String());
> +
> +    size_t utteranceIndex = m_utteranceQueue.find(utterance);
> +    ASSERT(utteranceIndex != WTF::notFound);
> +    if (utteranceIndex != WTF::notFound)
> +        m_utteranceQueue.remove(utteranceIndex);
> +
> +    // Start the next job if there is one pending.
> +    if (m_utteranceQueue.size())
> +        startSpeakingImmediately(m_utteranceQueue.first().get());
> +}

Can you have more than one utterance going at once? If so, what is preventing the first utterance in the queue from being passed to startSpeakingImmediately multiple times?
(In reply to comment #16)
> (From update of attachment 187495 [details])
> View in context: https://bugs.webkit.org/attachment.cgi?id=187495&action=review
>
> > Source/WebCore/platform/mac/PlatformSpeechSynthesizerMac.mm:41
> > + NSSpeechSynthesizer *m_synthesizer;
>
> If you use a RetainPtr<> here, you don't need the dealloc.
>
> > Source/WebCore/platform/mac/PlatformSpeechSynthesizerMac.mm:88
> > + WTF::Vector<WTF::RefPtr<WebCore::PlatformSpeechSynthesisVoice> > voiceList = m_synthesizerObject->voiceList();
>
> The WTF::'s here are not necessary.
>
> > Source/WebCore/Modules/speech/SpeechSynthesis.cpp:90
> > + // If the queue is empty, speak this immediately and add it to the queue.
>
> s/is/was/
>
> > Source/WebCore/Modules/speech/SpeechSynthesis.cpp:128
> > +void SpeechSynthesis::handleSpeakingCompleted(SpeechSynthesisUtterance* utterance, bool errorOccurred)
> > +{
> > +    ASSERT(utterance);
> > +
> > +    m_isSpeaking = false;
> > +
> > +    fireEvent(errorOccurred ? eventNames().errorEvent : eventNames().endEvent, utterance, 0, String());
> > +
> > +    size_t utteranceIndex = m_utteranceQueue.find(utterance);
> > +    ASSERT(utteranceIndex != WTF::notFound);
> > +    if (utteranceIndex != WTF::notFound)
> > +        m_utteranceQueue.remove(utteranceIndex);
> > +
> > +    // Start the next job if there is one pending.
> > +    if (m_utteranceQueue.size())
> > +        startSpeakingImmediately(m_utteranceQueue.first().get());
> > +}
>
> Can you have more than one utterance going at once? If so, what is preventing the first utterance in the queue from being passed to startSpeakingImmediately multiple times?

You should only be allowed to have one utterance going at a time. Everything will be spoken in the sequential order it was added.
Created attachment 187496 [details] patch
Comment on attachment 187496 [details] patch

View in context: https://bugs.webkit.org/attachment.cgi?id=187496&action=review

> Source/WebCore/Modules/speech/SpeechSynthesis.cpp:81
> + m_isSpeaking = true;

Should this ASSERT(!m_isSpeaking)?

> Source/WebCore/Modules/speech/SpeechSynthesis.cpp:82
> + utterance->setStartTime(currentTime());

It might make sense for this to use monotonicallyIncreasingTime() instead.

> Source/WebCore/Modules/speech/SpeechSynthesis.cpp:92
> + if (m_utteranceQueue.size() == 1)
> +     startSpeakingImmediately(utterance);

One thing that might make this a bit clearer is if you pull the utterance off the queue here and put it in a "currentUtterance" member (then you might be able to get rid of m_isSpeaking).

> Source/WebCore/Modules/speech/SpeechSynthesis.cpp:116
> + m_isSpeaking = false;

Should this ASSERT(m_isSpeaking)?

> Source/WebCore/Modules/speech/SpeechSynthesis.cpp:126
> + if (m_utteranceQueue.size())

You could use isEmpty() here to be a bit clearer.

> Source/WebCore/Modules/speech/SpeechSynthesis.h:75
> + Vector<RefPtr<SpeechSynthesisUtterance> > m_utteranceQueue;

A Deque might be a bit more efficient.
Good ideas. Will make all changes.

(In reply to comment #19)
> (From update of attachment 187496 [details])
> View in context: https://bugs.webkit.org/attachment.cgi?id=187496&action=review
>
> > Source/WebCore/Modules/speech/SpeechSynthesis.cpp:81
> > + m_isSpeaking = true;
>
> Should this ASSERT(!m_isSpeaking)?
>
> > Source/WebCore/Modules/speech/SpeechSynthesis.cpp:82
> > + utterance->setStartTime(currentTime());
>
> It might make sense for this to use monotonicallyIncreasingTime() instead.
>
> > Source/WebCore/Modules/speech/SpeechSynthesis.cpp:92
> > + if (m_utteranceQueue.size() == 1)
> > +     startSpeakingImmediately(utterance);
>
> One thing that might make this a bit clearer is if you pull the utterance off the queue here and put it in a "currentUtterance" member (then you might be able to get rid of m_isSpeaking).
>
> > Source/WebCore/Modules/speech/SpeechSynthesis.cpp:116
> > + m_isSpeaking = false;
>
> Should this ASSERT(m_isSpeaking)?
>
> > Source/WebCore/Modules/speech/SpeechSynthesis.cpp:126
> > + if (m_utteranceQueue.size())
>
> You could use isEmpty() here to be a bit clearer.
>
> > Source/WebCore/Modules/speech/SpeechSynthesis.h:75
> > + Vector<RefPtr<SpeechSynthesisUtterance> > m_utteranceQueue;
>
> A Deque might be a bit more efficient.
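Pulling the suggestions from comment #19 together, a rough sketch (member names assumed, not the landed patch) with a Deque-backed queue and a current-utterance member replacing the m_isSpeaking flag. Under this model the in-flight utterance lives in m_currentSpeechUtterance rather than at the head of the queue:

    // SpeechSynthesis.h
    RefPtr<SpeechSynthesisUtterance> m_currentSpeechUtterance;
    Deque<RefPtr<SpeechSynthesisUtterance> > m_utteranceQueue;

    // SpeechSynthesis.cpp
    void SpeechSynthesis::startSpeakingImmediately(SpeechSynthesisUtterance* utterance)
    {
        // Only one utterance may be in flight at a time.
        ASSERT(!m_currentSpeechUtterance);
        m_currentSpeechUtterance = utterance;
        utterance->setStartTime(monotonicallyIncreasingTime());
        // ... hand the utterance off to the platform synthesizer ...
    }

    void SpeechSynthesis::handleSpeakingCompleted(SpeechSynthesisUtterance* utterance, bool errorOccurred)
    {
        ASSERT(utterance == m_currentSpeechUtterance);

        // Keep the utterance alive while we fire events, then clear the current slot.
        RefPtr<SpeechSynthesisUtterance> protect(utterance);
        m_currentSpeechUtterance = 0;

        // Both "end" and "error" mean this utterance is finished.
        fireEvent(errorOccurred ? eventNames().errorEvent : eventNames().endEvent, utterance, 0, String());

        // Start the next pending job, if any.
        if (!m_utteranceQueue.isEmpty())
            startSpeakingImmediately(m_utteranceQueue.takeFirst().get());
    }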
http://trac.webkit.org/changeset/142433