Summary:        WebAudio: propagate a silence hint through the AudioNode graph.
Product:        WebKit
Reporter:       Jer Noble <jer.noble>
Component:      New Bugs
Assignee:       Jer Noble <jer.noble>
Status:         RESOLVED FIXED
Severity:       Normal
CC:             crogers, dglazkov, eric.carlson, feature-media-reviews, rtoy, webkit.review.bot
Priority:       P2
Version:        528+ (Nightly build)
Hardware:       Unspecified
OS:             Unspecified
Bug Depends on: 74750, 76573
Bug Blocks:     77224
Description
Jer Noble 2011-12-14 16:06:56 PST
Created attachment 119332 [details]
Patch
Created attachment 122466 [details]
Patch
Created attachment 122481 [details]
Patch
Merge conflict snuck into the previous patch, causing a compile error.
Comment on attachment 122481 [details]
Patch

Attachment 122481 [details] did not pass chromium-ews (chromium-xvfb):
Output: http://queues.webkit.org/results/11152739

New failing tests:
webaudio/convolution-mono-mono.html
webaudio/delaynode-scheduling.html
webaudio/delaynode.html

Created attachment 122674 [details]
Patch
Comment on attachment 122674 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=122674&action=review

I like the general idea of avoiding unnecessary processing with sources which haven't yet started playing or are finished playing.

The overall patch scares me quite a bit, because it seems like it'll be quite easy for bugs to creep in where we forget to setSilent(false) once we've touched an AudioBus. This patch has one case like that at the end of AudioBus::createBySampleRateConverting(). As we add more code, there could be more potential problems like that, which could be quite hard to debug due to the extensive handling of the m_silent state.

Maybe we can be slightly less stringent on AudioChannel and AudioBus construction, where the initial state of m_silent is always "false", even though technically it is "true". Then it seems we still get nearly all of the advantage of avoiding processing and propagating silence, but we're immune to bugs where we forget to setSilent(false) as I mention above.

Put another way, the AudioBus would be non-silent by default, but would quickly get set to silent (and propagated) in AudioNode::silenceOutputs(). Does that make sense?

> Source/WebCore/platform/audio/AudioBus.cpp:349
> +        return;

This logic seems wrong. If the source is silent then it needs to copy the silence, so don't we need to zero()?

> Source/WebCore/platform/audio/AudioBus.cpp:351
> +        return;

Seems like this could be simplified to:

    if (sumToBus && sourceBusSafe.isSilent())
        return;

since adding in silence is a NOP.

> Source/WebCore/platform/audio/AudioBus.cpp:484
>      // Return exact copy.

At the end of AudioBus::createBySampleRateConverting() we have:

    return destinationBus.release();

But don't we need to say destinationBus->setSilent(false), since its default state is silent?
> Source/WebCore/platform/audio/AudioBus.cpp:527
> +    return adoptPtr(new AudioBus(sourceBus->numberOfChannels(), sourceBus->length()));

We need to return a mono bus here (number of channels == 1):

    return adoptPtr(new AudioBus(1, sourceBus->length()));

> Source/WebCore/platform/audio/AudioChannel.cpp:60
> +        return;

This check seems unnecessary, since we know that sourceChannel->length() >= length().

> Source/WebCore/platform/audio/AudioChannel.cpp:63
> +        zero();

Seems like this can be simplified, since we know sourceChannel->length() >= length():

    if (sourceChannel->isSilent())
        zero();

> Source/WebCore/platform/audio/AudioChannel.cpp:113
> +        return;

Not needed. This check has already been made on lines 104-105.

> Source/WebCore/platform/audio/AudioChannel.h:88
>      {

Shouldn't we do a check for m_silent already being zero? This way we can avoid a memset().

> Source/WebCore/webaudio/AudioBufferSourceNode.cpp:512
> +    // If we are no longer playing, propogate silence ahead to upstream nodes.

Shouldn't "upstream" -> "downstream"?

> Source/WebCore/webaudio/ConvolverNode.h:62
> +    virtual bool propagatesSilence() const { return false; }

Should add a comment about "tail time".

> Source/WebCore/webaudio/DelayNode.h:48
> +    virtual bool propagatesSilence() const { return false; }

Should add a comment about "tail time".

(In reply to comment #6)
> Maybe we can be slightly less stringent on AudioChannel and AudioBus construction, where the initial state of m_silent is always "false", even though technically it is "true".
>
> Put another way, the AudioBus would be non-silent by default, but would quickly get set to silent (and propagated) in AudioNode::silenceOutputs(). Does that make sense?

I'm afraid I don't follow. If AudioChannels are non-silent by default, they would have to be marked as silent explicitly. And as soon as they are, the same possibility of a bug (i.e. someone modifying the AudioChannel's memory forgets to call setSilent(false)) is present. Where would the silent flag be turned on in your alternative?

The other possibility I considered was having the AudioChannel::data() accessor clear the silent flag. That would solve the possible future bug, at the cost of perhaps doing unnecessary processing after a non-mutating call to AudioChannel::data().

>> Source/WebCore/platform/audio/AudioBus.cpp:349
>> +        return;
>
> This logic seems wrong. If the source is silent then it needs to copy the silence, so don't we need to zero()?

You're right, this should zero the destination bus.

>> Source/WebCore/platform/audio/AudioBus.cpp:351
>> +        return;
>
> Seems like this could be simplified, since adding in silence is a NOP.

Yup.

>> Source/WebCore/platform/audio/AudioBus.cpp:484
>>      // Return exact copy.
>
> But don't we need to say destinationBus->setSilent(false), since its default state is silent?

Yes, we do. Changed.

>> Source/WebCore/platform/audio/AudioBus.cpp:527
>> +    return adoptPtr(new AudioBus(sourceBus->numberOfChannels(), sourceBus->length()));
>
> We need to return a mono bus here (number of channels == 1)

Whoops. Changed.

>> Source/WebCore/platform/audio/AudioChannel.cpp:63
>> +        zero();
>
> Seems like this can be simplified, since we know sourceChannel->length() >= length().

Indeed. Changed.

>> Source/WebCore/platform/audio/AudioChannel.cpp:113
>> +        return;
>
> Not needed. This check has already been made on lines 104-105.

Indeed. Removed.

>> Source/WebCore/platform/audio/AudioChannel.h:88
>>      {
>
> Shouldn't we do a check for m_silent already being zero? This way we can avoid a memset().

I specifically didn't do this. zero() should do a memset() even when m_silent is true, as someone may have called setSilent(true) without zeroing out the channel's data.

>> Source/WebCore/webaudio/AudioBufferSourceNode.cpp:512
>> +    // If we are no longer playing, propogate silence ahead to upstream nodes.
>
> Shouldn't "upstream" -> "downstream"?

Well, that depends on your point of view (and your local gravity). ;-) Changed.

>> Source/WebCore/webaudio/ConvolverNode.h:62
>> +    virtual bool propagatesSilence() const { return false; }
>
> Should add a comment about "tail time".

Sure.

>> Source/WebCore/webaudio/DelayNode.h:48
>> +    virtual bool propagatesSilence() const { return false; }
>
> Should add a comment about "tail time".

Ditto.

Created attachment 122834 [details]
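To make the copy-versus-sum distinction above concrete, here is a minimal, illustrative stand-in for the control flow the review settles on (copying silence must zero the destination, while summing silence is a no-op). The `Bus`/`mixFrom` names are hypothetical simplifications, not WebKit's actual AudioBus API:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Illustrative stand-in for the silence handling discussed for
// AudioBus::copyFrom()/sumFrom(); not WebKit's real class.
struct Bus {
    std::vector<float> samples;
    bool silent = false;

    void zero() {
        std::memset(samples.data(), 0, samples.size() * sizeof(float));
        silent = true;
    }

    // Copy and sum collapsed into one method for illustration.
    void mixFrom(const Bus& source, bool sumToBus) {
        if (source.silent) {
            if (!sumToBus)
                zero();  // copying silence: destination becomes silent
            return;      // summing silence is a NOP: return early
        }
        for (std::size_t i = 0; i < samples.size(); ++i)
            samples[i] = (sumToBus ? samples[i] : 0.0f) + source.samples[i];
        silent = false;  // we touched the data, so clear the hint
    }
};
```

Note the early-return path is exactly where a forgotten flag update cannot hurt: the destination's data is untouched, so its existing `silent` state remains accurate.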
Patch
Comment on attachment 122674 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=122674&action=review

>>> Source/WebCore/platform/audio/AudioChannel.h:88
>>> {
>>
>> Shouldn't we do a check for m_silent already being zero? This way we can avoid a memset().
>
> I specifically didn't do this. zero() should do a memset() even when m_silent is true, as someone may have called setSilent(true) without zeroing out the channel's data.

But searching through this patch, there's no place where setSilent(true) is actually called (only setSilent(false)), so currently this case never comes up.

One possibility is to get rid of the setSilent(bool) method and replace all current calls of setSilent(false) with a new method called "unzero()", which would be symmetric with "zero()". That way you should be able to make the optimization in zero() (I think).

(In reply to comment #7)
> I'm afraid I don't follow. If AudioChannels are non-silent by default, they would have to be marked as silent explicitly. And as soon as they are, the same possibility of a bug (i.e. someone modifying the AudioChannel's memory forgets to call setSilent(false)) is present.
>
> Where would the silent flag be turned on in your alternative?

Yes, it's true that bugs could happen as soon as the flag is turned on, but if its default state is off, then at least the bugs dealing with freshly created (and then modified) AudioBus objects could be avoided. In my suggestion, the silent flag gets turned on in the AudioNode::silenceOutputs() method (for example, if an AudioBufferSourceNode outputs silence).

But maybe your suggestion about data() is the right approach for solving this.

> The other possibility I considered was having the AudioChannel::data() accessor clear the silent flag. That would solve the possible future bug at the cost of perhaps doing unnecessary processing after a non-mutating call to AudioChannel::data().

Maybe this is better. It seems like we'd want to be careful to distinguish the "const" vs. "non-const" cases. Can you think of any other downside? It seems like if we access the pointer for possible writing, then we should assume the worst (writing non-zero values). And maybe avoid these types of bugs...
Comment on attachment 122834 [details]
Patch

Attachment 122834 [details] did not pass chromium-ews (chromium-xvfb):
Output: http://queues.webkit.org/results/11282013

New failing tests:
webaudio/panner-equalpower.html

(In reply to comment #10)
> Yes, it's true that bugs could happen as soon as the flag is turned on, but if its default state is off, then at least the bugs dealing with freshly created (and then modified) AudioBus objects could be avoided.
>
> Maybe this is better. It seems like we'd want to be careful to distinguish the "const" vs. "non-const" cases. Can you think of any other downside? It seems like if we access the pointer for possible writing, then we should assume the worst (writing non-zero values).

I've reworked my patch so that the non-const "data()" accessor clears the silent flag. I'm not entirely happy with it, though. It's not obvious that one call to data() will clear the flag, but the other one won't. I'm considering renaming the const "data()" accessor something more obvious, like "safeData()". Or we could take a cue from Foundation and rename the accessors "mutableData()" and "data()".

There is also at least one portion of the code where the pointer result of "data()" is stored, and then "setSilent(false)" or "unzero()" is called. There may not be a solution for that problem, though.

Comment on attachment 122834 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=122834&action=review

I'm not seeing the change to data() in this latest patch. Also, I'm interested in your thoughts about the idea of getting rid of the setSilent() method and, in its place, adding an unzero() method. That way, you should be able to get the optimization in zero() where you can avoid repeatedly calling memset on an already-zero buffer.

> Source/WebCore/webaudio/ConvolverNode.h:64
> +    virtual bool propagatesSilence() const { return false; }

We need to do the same with the BiquadFilterNode, since it has a non-zero tailTime. It may sound odd, but I've played with chaining tens of thousands of BiquadFilterNodes together (configured as allpass) to artificially generate impulse responses of > 5 seconds in length.

(In reply to comment #13)
> I'm not seeing the change to data() in this latest patch.

Oh, I haven't posted it yet. I'm still working out the side-effects of renaming "data()" to "mutableData()".

> We need to do the same with the BiquadFilterNode, since it has a non-zero tailTime.

Interesting. I only found those two nodes because, with this patch, their tests started failing. I don't suppose there are tests for BiquadFilterNode yet?

So, in renaming data() -> mutableData(), I've found a lot of places where const-correctness is being broken, and even places where source data is being modified. I'm still working out all the kinks, and as soon as I do, I'll get it up for review.

Created attachment 123004 [details]
Patch

New patch, this one dependent on the patch in bug #76573.

Created attachment 123173 [details]
Patch

Patch rebased after landing the fix for bug #76573.

Comment on attachment 123173 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=123173&action=review

Looks pretty good overall so far.

> Source/WebCore/platform/audio/AudioBus.cpp:528
> +    destinationBus->unzero();

Technically, we don't need the unzero() anymore since we're calling mutableData().

> Source/WebCore/platform/audio/AudioBus.cpp:555
> +    destinationBus->unzero();

I don't think we need the unzero() anymore because of the mutableData() call.

> Source/WebCore/webaudio/DelayNode.h:50
> +    virtual bool propagatesSilence() const { return false; }

More nodes to handle:
1. BiquadFilterNode (which is an IIR filter with tail-time)
2. AudioPannerNode (if configured to HRTFPanner), since it has both a tail-time and a latency
3. DynamicsCompressorNode (has a small latency, I think)
4. JavaScriptAudioNode -- latency
5. MediaElementAudioSourceNode could add logic to propagate silence in some cases -- though maybe not worth the trouble at this point

Created attachment 130672 [details]
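The accessor split discussed above can be sketched as follows. This is a hypothetical simplification of AudioChannel, not WebKit's actual code: a const `data()` that leaves the silence hint intact, and a non-const `mutableData()` that conservatively clears it because the caller may write non-zero samples:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical simplification of WebKit's AudioChannel, illustrating
// the const/non-const accessor split and the silence hint.
class Channel {
public:
    explicit Channel(std::size_t length) : m_data(length, 0.0f), m_silent(true) {}

    // Non-mutating access: the silence hint survives.
    const float* data() const { return m_data.data(); }

    // Mutating access: assume the worst (caller writes non-zero values).
    float* mutableData() {
        m_silent = false;
        return m_data.data();
    }

    bool isSilent() const { return m_silent; }

    void zero() {
        // Deliberately no early-out on m_silent: per the discussion, the
        // flag could have been set without the data actually being zeroed.
        std::fill(m_data.begin(), m_data.end(), 0.0f);
        m_silent = true;
    }

private:
    std::vector<float> m_data;
    bool m_silent;
};
```

The design choice mirrors the thread's conclusion: when a pointer is handed out for possible writing, the only safe assumption is that the channel is no longer silent.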
Patch
Comment on attachment 130672 [details]
Patch

Attachment 130672 [details] did not pass chromium-ews (chromium-xvfb):
Output: http://queues.webkit.org/results/11850310

This new patch depends on the one in bug #74750, so expect to see a lot of breakage in EWS bots.

Created attachment 130690 [details]
Patch
Addressed Chris's comments w.r.t. tailTime for certain classes in this bug, and the rest of the classes in the tailTime() bug.
Comment on attachment 123173 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=123173&action=review

>> Source/WebCore/platform/audio/AudioBus.cpp:528
>> +    destinationBus->unzero();
>
> Technically, we don't need the unzero() anymore since we're calling mutableData()

Okay, removed.

>> Source/WebCore/platform/audio/AudioBus.cpp:555
>> +    destinationBus->unzero();
>
> I don't think we need the unzero() anymore because of the mutableData() call

Ditto.

>> Source/WebCore/webaudio/DelayNode.h:50
>> +    virtual bool propagatesSilence() const { return false; }
>
> more nodes to handle:
> 1. BiquadFilterNode (which is an IIR filter with tail-time)
> 2. AudioPannerNode (if configured to HRTFPanner) since it has both a tail-time and a latency
> 3. DynamicsCompressorNode (has a small latency I think)
> 4. JavaScriptAudioNode -- latency
> 5. MediaElementAudioSourceNode could add logic to propagate silence in some cases -- though maybe not worth the trouble at this point

1. How would one calculate the tail-time? Do you have to plumb all the way down to Biquad, and if so, does that mean the tail time is two samples?
2. Same deal.
3. Same deal.
4. JavaScriptAudioNode should probably just never be marked as propagating silence, since it can be used for sound generation as well as processing.
5. How would MediaElementAudioSourceNode propagate audio?

(Whoops, forgot to submit this message. I addressed points 1-3 in bug #74750 and 4-5 in this one.)
Comment on attachment 130690 [details]
Patch

Attachment 130690 [details] did not pass chromium-ews (chromium-xvfb):
Output: http://queues.webkit.org/results/11849383

Comment on attachment 130690 [details]
Patch

Attachment 130690 [details] did not pass mac-ews (mac):
Output: http://queues.webkit.org/results/11864023

(In reply to comment #22)
> 1. How would one calculate the tail-time? Do you have to plumb all the way down to Biquad, and if so, does that mean the tail time is two samples?

The tail time is not 2 samples, but theoretically infinite. It's probably close enough to zero after 128 or 256 samples.

The convolution-mono-mono test fails because the output is cut short due to the (current) latency of the convolver. There is a latency of 128 samples. If we hack tailTime() to add 128 samples for the latency, the test passes.

It seems wrong to add the latency to tailTime(). Should we add a latencyTime() method to get the latency of a node? This additional time would be added in propagatesSilence().

(In reply to comment #25)
> The convolution-mono-mono test fails because the output is cut short due to the (current) latency of the convolver.

That test doesn't seem to use a BiquadNode. All I see is a ConvolverNode, which has a tailTime() of impulseResponseLength() / sampleRate(), which looks to be 1s or 44k samples. Is a BiquadNode implicitly created?

> It seems wrong to add the latency to tailTime(). Should we add a latencyTime() method to get the latency of a node?

I would suggest that they are entirely separate concepts:

Tail time is the amount of time after the last non-silent input that the output is non-silent.
Latency is the amount of time after the first non-silent input that the output is non-silent.

I wouldn't suggest changing the definition of tail time to be the amount of time after the first non-silent output that the output is non-silent.

(In reply to comment #26)
> Is a BiquadNode implicitly created?

Sorry, I did not mean to imply that the biquad had anything to do with the convolution-mono-mono test. There is no biquad in the test. I was just pointing out why that test fails with the current patch.

> Tail time is the amount of time after the last non-silent input that the output is non-silent.
> Latency is the amount of time after the first non-silent input that the output is non-silent.

Agreed.

> I wouldn't suggest changing the definition of tail time to be the amount of time after the first non-silent output that the output is non-silent.

Agreed. So we should add a latencyTime() method?

(In reply to comment #27)
> Sorry, I did not mean to imply that the biquad had anything to do with the convolution-mono-mono test. I was just pointing out why that test fails with the current patch.

Ah, it appears it's an off-by-one error. propagatesSilence() should be checking (logically speaking) from tailTime() + firstSilentTime, instead of tailTime() + lastNonSilentTime. Effectively, we're setting lastNonSilentTime to the first sample in currentTime, instead of the last sample.

> So we should add a latencyTime() method?

That seems like an improvement, but perhaps one for another bug?

(In reply to comment #28)
> That seems like an improvement, but perhaps one for another bug?

Yes. I didn't mean to imply that latencyTime() should be added in this bug.

This also means that the delay node does not have a tailTime but does have a latencyTime?

(In reply to comment #29)
> This also means that the delay node does not have a tailTime but does have a latencyTime?

No, it'd have both. (And they'd be equal.) If a delay node had a latency of 1s and a tail time of 0s, it would mean changing the definition of tail time to be "the amount of time after the first non-silent /output/ that the output is non-silent."

In the end it's a "six of one, half a dozen of the other" thing. I'm fine with changing the definition of tailTime() in a latencyTime() patch.

Created attachment 130873 [details]
Patch
This patch should fix the convolution-mono-mono test by accounting for Convolution latency and by fixing an off-by-one error in AudioNode::propagatesSilence()
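The fixed check described above can be sketched as a single predicate. This is an illustrative model, not WebKit's exact AudioNode API: a node may propagate silence only once tailTime() plus latencyTime() have elapsed since its input first went silent. Measuring from the time the input *became* silent (firstSilentTime), rather than from the last non-silent sample, avoids the off-by-one noted in comment #28:

```cpp
#include <cassert>

// Illustrative model of the silence-propagation predicate; the member
// names mirror the thread's discussion, not WebKit's exact interface.
struct NodeTiming {
    double tailTime;     // e.g. impulseResponseLength() / sampleRate() for a convolver
    double latencyTime;  // e.g. fftSize/2 worth of samples for the HRTF panner

    // True once enough time has passed since the input first went silent
    // that the node's output can contain nothing but silence.
    bool propagatesSilence(double firstSilentTime, double currentTime) const {
        return currentTime >= firstSilentTime + tailTime + latencyTime;
    }
};
```

With a 1s delay node (tail time 1s, zero latency), silence propagates exactly 1s after the input goes silent; a convolver with a 128-sample latency must wait that little bit longer, which is what made convolution-mono-mono's output cut short before this fix.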
(In reply to comment #30) > If a delay node had a latency of 1s and a tail time of 0s, it would mean changing the definition of tail time to be "the amount of time after the first non-silent /output/ that the output is non-silent." Wait, no, I think I confused myself. This doesn't sound right. Regardless, I'd be fine changing the definition of tailTime() in a patch which added latencyTime(). :) (In reply to comment #32) > (In reply to comment #30) > > If a delay node had a latency of 1s and a tail time of 0s, it would mean changing the definition of tail time to be "the amount of time after the first non-silent /output/ that the output is non-silent." > > Wait, no, I think I confused myself. This doesn't sound right. Regardless, I'd be fine changing the definition of tailTime() in a patch which added latencyTime(). :) Do we need to change the definition of tailTime()? Your definition of tailTime as the length of time that there is non-zero output after getting zero inputs seems right. Latency would be the length of time when non-zero input produces zero output. So once a node has zero inputs (after getting non-zero inputs), the node needs to keep producing output for latencyTime() + tailTime() seconds. (In reply to comment #33) > (In reply to comment #32) > > (In reply to comment #30) > > > If a delay node had a latency of 1s and a tail time of 0s, it would mean changing the definition of tail time to be "the amount of time after the first non-silent /output/ that the output is non-silent." > > > > Wait, no, I think I confused myself. This doesn't sound right. Regardless, I'd be fine changing the definition of tailTime() in a patch which added latencyTime(). :) > > Do we need to change the definition of tailTime()? Your definition of tailTime as the length of time that there is non-zero output after getting zero inputs seems right. > > Latency would be the length of time when non-zero input produces zero output. 
> So once a node has zero inputs (after getting non-zero inputs), the node needs to keep producing output for latencyTime() + tailTime() seconds.

Yes exactly. Here's some (bad) ASCII art to demonstrate:

    Current Code w/Delay Node (1s):
    time      0s          1s          2s
              ----------------------
    Input:    ++++++++++++----------
    Output:   ------------++++++++++
    tailTime              ^---------^ (1s)
    latency   ^----------^ (1s)

    Proposed Code w/Delay Node (1s):
    time      0s          1s          2s
              ----------------------
    Input:    ++++++++++++----------
    Output:   ------------++++++++++
    tailTime              ^ (0s)
    latency   ^----------^ (1s)

    Current Code w/ConvolutionNode:
    time      0s          1s          2s
              ----------------------
    Input:    ++++++++++++----------
    Output:   -+++++++++++++++++++++
    tailTime  ^--------------------^ (1.002s)
    latency   ^^ (.002s)

    Proposed Code w/ConvolutionNode:
    time      0s          1s          2s
              ----------------------
    Input:    ++++++++++++----------
    Output:   -+++++++++++++++++++++
    tailTime   ^-------------------^ (1s)
    latency   ^^ (.002s)

Previously, the definition of tailTime() /included/ latency. The new definition would be /in addition/ to latency. So basically, yes I agree with you. :)

(In reply to comment #34)
> Yes exactly. Here's some (bad) ASCII art to demonstrate:

[nice diagrams snipped]

> Previously, the definition of tailTime() /included/ latency. The new definition would be /in addition/ to latency.
>
> So basically, yes I agree with you. :)

:-) Do we really need to separate out the latency time and the tail time? Maybe we can just call it minimumProcessingTime, which would include the latency time and the tail time? The definition might be the time from the application of an impulse until the effect of the impulse has been completely output. (Well, for biquads, that would be an approximation of the effect, since the impulse response is infinite.)

Just some added notes about the way I think of these terms: "latency" is an additional (usually undesirable) extra delay that a given processing algorithm incurs.
This does *not* include a delay which is an expected part of the processing, such as the DelayNode (it's expected to incur a delay - that's what the effect is). The DelayNode, on the other hand, has a tailTime equal to the delay time.

The HRTFPanner currently has both a latency and a tailTime. It uses an FFT, which has an fftSize/2 latency, but it's conceivable to use a "direct" convolution algorithm with zero latency; it would still have a tailTime.

Similarly, the ConvolverNode has a latency, but Xingnan is working on a fix for that: https://bugs.webkit.org/show_bug.cgi?id=75564. Once that's fixed, the ConvolverNode would have zero latency, but still a tailTime equal to the length of the impulse response (which can be significantly long).

The DynamicsProcessorNode has a latency caused by its "pre-delay" or "look-ahead" delay, which is fairly small, but no tailTime.

Comment on attachment 130873 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=130873&action=review

> Source/WebCore/platform/audio/AudioBus.cpp:434

I think it's clearer to write lines 427-433 as:

    if (sourceBusSafe.isSilent()) {
        if (!sumToBus)
            zero();
        return;
    }

> Source/WebCore/platform/audio/AudioBus.cpp:521
> + if (sourceBus.length() == numberOfGainValues && length() == sourceBus.length() && sourceBus.isSilent()) {

Use sourceBus.length() == length() instead of the reverse?

> Source/WebCore/platform/audio/AudioBus.cpp:571

Is there any possibility of a problem if we return an AudioBus of sourceBus->length() instead of the actual destinationLength? Do we need to set the sample rate to the new sample rate?

> Source/WebCore/platform/audio/AudioBus.h:87
> + void unzero();

Can we call this clearSilentFlag or something? Every time I read unzero, I think it means make the channel non-zero in some magical way. (Because zero() actually sets the channel to all zeroes.)
> Source/WebCore/platform/audio/AudioChannel.cpp:65
> + memcpy(mutableData(), sourceChannel->data(), sizeof(float) * length());

I think the logic is clearer this way:

    if (sourceChannel->isSilent()) {
        if (!isSilent())
            zero();
    } else
        memcpy(...)

> Source/WebCore/platform/audio/AudioChannel.cpp:119
> float max = 0.0f;

nit: This isn't part of the patch, but this is a style issue. I think it should just be 0, not 0.0f.

> Source/WebCore/platform/audio/AudioChannel.h:106
> + void unzero() { m_silent = false; }

Same comment as for unzero() in AudioBus.h. Can we use a different name like clearSilentFlag?

> Source/WebCore/webaudio/AudioBufferSourceNode.h:86
> + virtual bool propagatesSilence() const;

Add a comment on what propagatesSilence() does. (Steal from the comment in propagatesSilence() in AudioBufferSourceNode.cpp.)

> Source/WebCore/webaudio/AudioNode.cpp:45
> , m_lastProcessingTime(-1.0)

nit: Shouldn't -1.0 be -1 for m_lastProcessingTime? (Not part of your changes, though.)

> Source/WebCore/webaudio/AudioNode.cpp:184
> + m_lastNonSilentTime = currentTime + (framesToProcess / m_sampleRate);

There is a round-off issue here because m_sampleRate is a float, so framesToProcess / m_sampleRate only has float accuracy. I think the correct way to compute this is:

    m_lastNonSilentTime = (context()->currentSampleFrame() + framesToProcess) / static_cast<double>(m_sampleRate);

> Source/WebCore/webaudio/AudioNode.h:143
> + virtual bool propagatesSilence() const;

Add a comment for what propagatesSilence() means.

(In reply to comment #37)
> (From update of attachment 130873 [details])
> View in context: https://bugs.webkit.org/attachment.cgi?id=130873&action=review

Just one other thing that didn't show up in the patch: in AudioBufferSourceNode.h, near line 119, there's a FIXME about propagating a silent hint. This should be deleted.

Created attachment 132888 [details]
Patch
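The round-off concern raised in the review above can be demonstrated in isolation. This is a standalone sketch (the names mirror the review comment, but nothing here is WebKit code): accumulating the float-accuracy quotient framesToProcess / sampleRate drifts measurably over roughly 1000 seconds of 128-frame quanta at 44.1 kHz, while dividing the exact integer frame count in double precision does not.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>

// Float-accuracy version: the quotient is computed (and explicitly kept)
// at float precision, as in the code the review flagged.
double endTimeFloatQuotient(double currentTime, std::size_t framesToProcess, float sampleRate)
{
    // static_cast<float> pins the quotient to float accuracy even on
    // targets that evaluate intermediates in extended precision.
    return currentTime + static_cast<float>(framesToProcess / sampleRate);
}

// Suggested version: exact integer frame count divided in double precision.
double endTimeExact(std::uint64_t currentSampleFrame, std::size_t framesToProcess, float sampleRate)
{
    return (currentSampleFrame + framesToProcess) / static_cast<double>(sampleRate);
}

// Accumulated drift of the float-quotient version after `quanta` render
// quanta of 128 frames at 44.1 kHz.
double driftAfter(int quanta)
{
    double accumulated = 0;
    std::uint64_t frames = 0;
    for (int i = 0; i < quanta; ++i) {
        accumulated = endTimeFloatQuotient(accumulated, 128, 44100);
        frames += 128;
    }
    return std::fabs(accumulated - endTimeExact(0, frames, 44100));
}
```

The per-quantum error is a fixed fraction of a float ulp, so it adds up with the same sign every quantum: after ~1000 seconds the drift is on the order of tens of microseconds, which matters when the value gates silence propagation.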
(In reply to comment #39)
> Created an attachment (id=132888) [details]
> Patch

This last patch didn't address any of Raymond's latest comments. I'll upload a new one which does.

Comment on attachment 130873 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=130873&action=review

>> Source/WebCore/platform/audio/AudioBus.cpp:434
> I think it's clearer to write lines 427-433 as:
>
>     if (sourceBusSafe.isSilent()) {
>         if (!sumToBus)
>             zero();
>         return;
>     }

Changed.

>> Source/WebCore/platform/audio/AudioBus.cpp:521
> Use sourceBus.length() == length() instead of the reverse?

Changed.

>> Source/WebCore/platform/audio/AudioBus.cpp:571
> Is there any possibility of a problem if we return an AudioBus of sourceBus->length() instead of actual destinationLength? Do we need to set the sample rate to the new sample rate?

You're right: the bus will be the wrong number of seconds long. I don't know if that's likely to be a problem, but just in case, let's fix it. Changed.

>> Source/WebCore/platform/audio/AudioBus.h:87
>> + void unzero();
> Can we call this clearSilentFlag or something?

Sure thing. Changed.

>> Source/WebCore/platform/audio/AudioChannel.cpp:65
> I think the logic is clearer this way:
>
>     if (sourceChannel->isSilent()) {
>         if (!isSilent())
>             zero();
>     } else
>         memcpy(...)

Changed.

>> Source/WebCore/platform/audio/AudioChannel.cpp:119
>> float max = 0.0f;
> nit: This isn't part of the patch, but this is a style issue. I think it should just be 0, not 0.0f.

Opportunity change!
>> Source/WebCore/webaudio/AudioBufferSourceNode.h:86
>> + virtual bool propagatesSilence() const;
> Add comment on what propagatesSilence() does. (Steal from comment in propagatesSilence() in AudioBufferSourceNode.cpp.)

Added.

>> Source/WebCore/webaudio/AudioNode.cpp:45
>> , m_lastProcessingTime(-1.0)
> nit: Shouldn't -1.0 be -1 for m_lastProcessingTime? (Not part of your changes, though.)

Another opportunity change!

>> Source/WebCore/webaudio/AudioNode.cpp:184
>> + m_lastNonSilentTime = currentTime + (framesToProcess / m_sampleRate);
> There is a round-off issue here because m_sampleRate is a float, so framesToProcess / m_sampleRate only has float accuracy. I think the correct way to compute this is:
>
>     m_lastNonSilentTime = (context()->currentSampleFrame() + framesToProcess) / static_cast<double>(m_sampleRate);

Changed.

>> Source/WebCore/webaudio/AudioNode.h:143
>> + virtual bool propagatesSilence() const;
> Add comment for what propagatesSilence() means.

Added.

Created attachment 132919 [details]
Patch
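The silent-flag bookkeeping agreed on in the review above can be sketched as a standalone class. This is a stand-in, not the real WebCore::AudioChannel; the method names follow the review's suggestions (including the clearSilentFlag rename), and the layout is simplified to a plain vector.

```cpp
#include <algorithm>
#include <cassert>
#include <cstring>
#include <vector>

// Stand-in for the silent-flag pattern discussed in review, not WebKit code.
class Channel {
public:
    explicit Channel(size_t length) : m_data(length), m_silent(true) { }

    size_t length() const { return m_data.size(); }
    bool isSilent() const { return m_silent; }
    void clearSilentFlag() { m_silent = false; }

    void zero()
    {
        if (m_silent)
            return; // already all zeroes; skip the fill
        m_silent = true;
        std::fill(m_data.begin(), m_data.end(), 0.0f);
    }

    // Structure suggested by the reviewer: silence is "copied" by zeroing,
    // everything else by memcpy, and the destination inherits the flag.
    void copyFrom(const Channel& source)
    {
        if (source.isSilent()) {
            zero();
            return;
        }
        clearSilentFlag();
        std::memcpy(m_data.data(), source.m_data.data(), sizeof(float) * length());
    }

    // Handing out a writable pointer must clear the flag, since the caller
    // may write non-zero samples.
    float* mutableData()
    {
        clearSilentFlag();
        return m_data.data();
    }

private:
    std::vector<float> m_data;
    bool m_silent;
};
```

This also illustrates the fragility the first reviewer worried about: any new code path that writes samples without going through mutableData() would leave the flag stale.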
(In reply to comment #42)
> Created an attachment (id=132919) [details]
> Patch

The silence hint in this patch is too aggressive. Nothing gets played out now. There's something different between this latest patch and the tenth patch, which worked very nicely.

(In reply to comment #43)
> The silence hint in this patch is too aggressive. Nothing gets played out now.

I'm looking into it.

(In reply to comment #44)
> I'm looking into it.

I think it's AudioBufferSourceNode::isPlaying. I think isPlaying should be true if the playbackState is either PLAYING_STATE or SCHEDULED_STATE. Kind of weird, but that's what the old m_isPlaying variable used to mean, and it has carried over into the new playbackState.
Yes, changing isPlaying that way makes it all work again. Created attachment 133069 [details]
Patch
Changed the definition of isPlaying() to match the previous m_isPlaying variable. All tests now passing, save an ASSERT crash in webaudio/convolution-mono-mono.html and a diff mismatch in webaudio/audiobuffersource-channels.html which needs platform-specific rebasing.
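The playbackState reasoning behind this fix can be captured in a toy enum. This is a hypothetical reduction (the real states live in AudioBufferSourceNode.h); it shows why a scheduled-but-not-yet-started source must still count as playing, so the silence hint doesn't suppress its future output.

```cpp
#include <cassert>

// Illustrative reduction of the playbackState discussion; the enumerator
// names match the bug thread, but this is not the WebCore class.
enum PlaybackState {
    UNSCHEDULED_STATE,
    SCHEDULED_STATE,
    PLAYING_STATE,
    FINISHED_STATE
};

// The fix: a scheduled-but-not-yet-started source must still count as
// "playing"; otherwise its output is marked silent before it ever starts.
bool isPlaying(PlaybackState state)
{
    return state == PLAYING_STATE || state == SCHEDULED_STATE;
}

bool hasFinished(PlaybackState state)
{
    return state == FINISHED_STATE;
}
```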
(In reply to comment #47)
> Changed the definition of isPlaying() to match the previous m_isPlaying variable. All tests now passing, save an ASSERT crash in webaudio/convolution-mono-mono.html and a diff mismatch in webaudio/audiobuffersource-channels.html which needs platform-specific rebasing.

Perhaps the conv crash is due to https://bugs.webkit.org/show_bug.cgi?id=81744. If so, it looks like we may have to change the latency time for the convolver.

(In reply to comment #48)
> Perhaps the conv crash is due to https://bugs.webkit.org/show_bug.cgi?id=81744.

Yup, that's the one.

> If so, it looks like we may have to change the latency time for the convolver.

Actually, it looks like the patch posted at the bug above solves the problem (in this crash, m_directMode is set to true).
(In reply to comment #49)
> Actually, it looks like the patch posted at the bug above solves the problem (in this crash, m_directMode is set to true).

Actually, I meant that if the convolver is doing a direct convolution now, the latency is gone, so we will need to change latencyTime to account for m_directMode. (But if we don't do it now, I guess it just means processing a little longer than absolutely necessary.)

(In reply to comment #50)
> Actually, I meant that if the convolver is doing a direct convolution now, the latency is gone, so we will need to change latencyTime to account for m_directMode.

Yup. Looks like a FIXME in ReverbConvolver::latencyFrames() got fixed by https://bugs.webkit.org/show_bug.cgi?id=75564. That function should now be able to return 0. But that can happen in another bug. :)

(In reply to comment #51)
> Yup. Looks like a FIXME in ReverbConvolver::latencyFrames() got fixed by https://bugs.webkit.org/show_bug.cgi?id=75564. That function should now be able to return 0. But that can happen in another bug. :)

I filed bug #81806 to track that FIXME change.
Comment on attachment 133069 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=133069&action=review

This looks very nice! Just a few comments. Probably want to add the dependency on bug 81744 to get the assert crash fixed before committing this.

> Source/WebCore/ChangeLog:40
> + (WebCore::AudioFileReader::createBus): Clear the bus's silent bit.

This isn't part of the patch. Should it be?

> Source/WebCore/ChangeLog:64
> + * webaudio/ConvolverNode.h:

Doesn't seem to be in the patch anymore.

> Source/WebCore/platform/audio/AudioBus.cpp:571
> + int destinationLength = sourceLength / sampleRateRatio;

Should the code to compute the destination length be made a function? There's identical code 10-15 lines farther down.

> Source/WebCore/webaudio/AudioBufferSourceNode.h:89
> + bool hasFinished() const { return m_playbackState == FINISHED_STATE; }

Not for you to fix, but the AudioBufferSourceNode code should probably use these new functions in more places.

> Source/WebCore/webaudio/AudioNode.cpp:208
> + return m_lastNonSilentTime + tailTime() < context()->currentTime();

I think we need to add latencyTime() into this sum. If a node has latency but no tail, we'll silence the output before the latency time has passed, so nothing will be output. (Adding the latency was needed earlier for the convolver because the output was cut short in the convolver test.)

Also, this implementation doesn't match the comment in AudioNode.h for propagatesSilence(). :-)

> Source/WebCore/webaudio/DelayNode.h:50
> + virtual bool propagatesSilence() const { return false; }

The comment seems wrong now that we have tailTime implemented. Also, why do we need a special implementation of propagatesSilence for a delay node? Won't the default work?

Just a quick note that this patch needs to be updated to handle the new Oscillator node. Without an update, the oscillator produces no output because the silent flag is true for the oscillator output.
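The AudioNode.cpp:208 point above - folding latencyTime() into the comparison - can be captured in a few lines. This is a simplified stand-in, not the real AudioNode interface:

```cpp
#include <cassert>

// Simplified model of the default propagatesSilence() check the reviewer
// asks for (stand-in struct; not WebCore::AudioNode). A node may propagate
// silence only after both its tail *and* its latency have elapsed since the
// last non-silent input.
struct SilenceModel {
    double lastNonSilentTime;
    double tailTime;
    double latencyTime;

    bool propagatesSilence(double currentTime) const
    {
        return lastNonSilentTime + tailTime + latencyTime < currentTime;
    }
};
```

With tailTime alone, a latency-only node (tail 0s, latency 1s) would be silenced the moment its input stopped, before its delayed output ever emerged; including latencyTime keeps it alive through the latency window.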
(In reply to comment #53)
> This looks very nice! Just a few comments. Probably want to add the dependency on bug 81744 to get the assert crash fixed before committing this.

> > Source/WebCore/ChangeLog:40
> > + (WebCore::AudioFileReader::createBus): Clear the bus's silent bit.
> This isn't part of the patch. Should it be?
>
> > Source/WebCore/ChangeLog:64
> > + * webaudio/ConvolverNode.h:
> Doesn't seem to be in the patch anymore.

Removed these two references.

> > Source/WebCore/platform/audio/AudioBus.cpp:571
> > + int destinationLength = sourceLength / sampleRateRatio;
> Should the code to compute the destination length be made a function? There's identical code 10-15 lines farther down.

It doesn't actually save any code; you still need the calculated sampleRateRatio and sourceLength down in the for-loop at the end of the function. You can save at least one LOC by not recalculating the sampleRateRatio, so I'll do that at least. :)

> > Source/WebCore/webaudio/AudioNode.cpp:208
> > + return m_lastNonSilentTime + tailTime() < context()->currentTime();
> I think we need to add latencyTime() into this sum.

Sure thing.

> > Source/WebCore/webaudio/DelayNode.h:50
> > + virtual bool propagatesSilence() const { return false; }
> Also, why do we need a special implementation of propagatesSilence for a delay node? Won't the default work?

Indeed. Removed this function.

Created attachment 136100 [details]
Patch
Thanks Jer. I had a quick look through and it seems good. I'd like to have a chance to apply the patch and play with it for a while on Monday.

Comment on attachment 136100 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=136100&action=review

Jer, looks great overall - just a couple of minor comments. I applied the patch and it appears to work fine.

> Source/WebCore/Modules/webaudio/AudioBufferSourceNode.h:88
> + bool isPlaying() const { return m_playbackState == PLAYING_STATE || m_playbackState == SCHEDULED_STATE; }

I would do a slight rename here and call this "isScheduledOrPlaying()" - otherwise it's not quite accurate and a little confusing.

> Source/WebCore/Modules/webaudio/AudioNode.cpp:183
> + m_lastNonSilentTime = (context()->currentSampleFrame() + framesToProcess) / static_cast<double>(m_sampleRate);

I'm a little unclear why you're doing this check (calling inputsAreSilent()) *before* you call pullInputs() below. It seems like you'd want to query the status of the inputs after they've been pulled. I can see that you add "+ framesToProcess" here, so maybe it works out the same. I'm not sure if this is incorrect, but it would be good to have an explanation (and probably a comment) if there's a subtle reason for it.

> Source/WebCore/Modules/webaudio/Oscillator.h:78
> + virtual bool propagatesSilence() const OVERRIDE { return FALSE; }

nit: FALSE -> false

> Source/WebCore/platform/audio/AudioChannel.cpp:61
> + zero();

Isn't this check for !isSilent() before calling zero() unnecessary, since the zero() method already checks for this condition? Although the check is harmless, it seems like it would be a little easier to read without it.
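The ordering question about inputsAreSilent() and pullInputs() comes down to when the inputs' silence is sampled relative to the pull, and which end of the quantum the timestamp marks. A minimal stand-in (not the real AudioNode) sketching the corrected order - pull first, then record the end-of-quantum time from exact integer frame counts:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Stand-in for the processing order discussed in review; member names echo
// the bug thread, but this is not WebCore code.
struct Node {
    std::uint64_t currentSampleFrame = 0;
    float sampleRate = 44100;
    double lastNonSilentTime = 0;
    bool inputsSilent = true;

    // In the real graph this renders the upstream nodes; here the caller
    // just tells us whether the pulled inputs came back silent.
    void pullInputs(bool sourceSilent) { inputsSilent = sourceSilent; }

    void process(std::size_t framesToProcess, bool sourceSilent)
    {
        pullInputs(sourceSilent); // sample the inputs *after* pulling them
        if (!inputsSilent) {
            // Timestamp marks the *end* of this quantum, computed from
            // exact integer frame counts in double precision.
            lastNonSilentTime = (currentSampleFrame + framesToProcess)
                / static_cast<double>(sampleRate);
        }
        currentSampleFrame += framesToProcess;
    }
};
```

A silent quantum leaves lastNonSilentTime untouched, which is what lets the tail/latency comparison eventually declare the node silent.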
Comment on attachment 136100 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=136100&action=review

>> Source/WebCore/Modules/webaudio/AudioBufferSourceNode.h:88
> I would do a slight rename here and call this "isScheduledOrPlaying()" -- otherwise it's not quite accurate and a little confusing

Sure. Changed.

>> Source/WebCore/Modules/webaudio/AudioNode.cpp:183
> I'm a little unclear why you're doing this check (calling inputsAreSilent()) *before* you call pullInputs() below. It seems like you'd want to query the status of the inputs after they've been pulled.

I originally had it that way (see this patch: https://bugs.webkit.org/attachment.cgi?id=130690&action=edit). I modified the patch to add the "+ framesToProcess" because currentSampleFrame() points to the beginning of the frame array, and m_lastNonSilentTime needs to point to the end of the frame array. However, you're right: when I changed that, I moved the m_lastNonSilentTime assignment before the pull(), and it should be after. I'll fix this.

>> Source/WebCore/Modules/webaudio/Oscillator.h:78
> nit: FALSE -> false

Changed.

>> Source/WebCore/platform/audio/AudioChannel.cpp:61
> Isn't this check for !isSilent() before calling zero() unnecessary since the zero() method already checks for this condition?

Sure. Changed.
Created attachment 136331 [details]
Patch
Comment on attachment 136331 [details]
Patch

View in context: https://bugs.webkit.org/attachment.cgi?id=136331&action=review

Thanks Jer, nice optimizations! I have just one comment, which you can address before landing. Or re-upload a new patch as you wish.

> Source/WebCore/Modules/webaudio/AudioNode.cpp:183

Can we call inputsAreSilent() just one time and cache it in a local bool, to avoid calling it twice in a row, as an optimization?

(In reply to comment #62)
> Can we call inputsAreSilent() just one time and cache in a local bool to avoid calling it twice in a row as an optimization?

Sure thing. Thanks!

Committed r113728: <http://trac.webkit.org/changeset/113728>