Bug 34452

Summary: Initial patch for audio engine: AudioBus and helper classes
Product: WebKit
Reporter: Chris Rogers <crogers>
Component: Platform
Assignee: Nobody <webkit-unassigned>
Status: RESOLVED FIXED
Severity: Normal
CC: cmarrin, dino, eric.carlson, kbr, michaeln, simon.fraser, webkit.review.bot
Priority: P2
Version: 528+ (Nightly build)
Hardware: Mac (Intel)
OS: OS X 10.5
Bug Depends on:
Bug Blocks: 34907
Attachments (Description / Flags):
patch for some initial audio engine support classes / none
patch for some initial audio engine support classes: fixes some minor style problems with the initial patch / none
Patch / none
Patch / none
Patch / none

Description Chris Rogers 2010-02-01 15:21:43 PST
This patch represents the first code for a new web audio engine explained here:
https://lists.webkit.org/pipermail/webkit-dev/2010-February/011508.html

This first patch includes the helper classes AudioFloatArray, AudioDoubleArray, AudioChannel, and AudioBus, which are used extensively by the rest of the engine.

There are no JavaScript APIs yet to directly call this code, so there are no layout tests.
Comment 1 Chris Rogers 2010-02-01 15:43:51 PST
Created attachment 47874 [details]
patch for some initial audio engine support classes

The AudioChannel, AudioBus, AudioFloatArray, and AudioDoubleArray classes are fundamental building blocks for the audio engine.
Comment 2 WebKit Review Bot 2010-02-01 15:46:46 PST
Attachment 47874 [details] did not pass style-queue:

Failed to run "WebKitTools/Scripts/check-webkit-style" exit_code: 1
WebCore/platform/audio/AudioFloatArray.h:46:  Code inside a namespace should not be indented.  [whitespace/indent] [4]
WebCore/platform/audio/AudioFloatArray.h:69:  One space before end of line comments  [whitespace/comments] [5]
WebCore/platform/audio/AudioBus.cpp:187:  More than one command on the same line in if  [whitespace/parens] [4]
WebCore/platform/audio/AudioBus.cpp:214:  More than one command on the same line in if  [whitespace/parens] [4]
WebCore/platform/audio/AudioBus.cpp:301:  More than one command on the same line in if  [whitespace/parens] [4]
WebCore/platform/audio/AudioBus.h:43:  Code inside a namespace should not be indented.  [whitespace/indent] [4]
Total errors found: 6


If any of these errors are false positives, please file a bug against check-webkit-style.
Comment 3 Chris Rogers 2010-02-01 17:17:17 PST
Created attachment 47884 [details]
patch for some initial audio engine support classes: fixes some minor style problems with the initial patch

fixes some minor style problems with the initial patch
Comment 4 Simon Fraser (smfr) 2010-02-01 17:37:58 PST
The naming of "AudioChannel" and "AudioBus" is a little confusing to me. AudioChannel to me represents some higher-level concept ("the left channel"); it seems wrong for it to own samples. Would it be better to have an AudioSampleBuffer object that has an AudioDescription that says if the contents are interleaved, for example?
Comment 5 Chris Rogers 2010-02-01 20:38:52 PST
Hi Simon, thanks for having a look.  If you think about it in CoreAudio terms, the AudioChannel is very similar to the "AudioBuffer" type in CoreAudio.  The AudioBus is similar to "AudioBufferList".  But, of course, since these are C++ classes they are somewhat easier to work with than the CoreAudio versions.  Please note that these data types are used internally in the engine implementation and are not expected to be exposed through the javascript API, so I don't think we'll confuse our javascript developers with the naming conventions.  For audio software engineers, a "bus" is a very common term in audio engineering and is used to denote a grouping of "channels".  In schematic diagrams for audio devices they are often shown as a line with a small diagonal slash. From my perspective as an audio engineer, these terms are not at all surprising and I think they make a lot of sense.

In the current engine code and any code I can envision for the future there will be no place for interleaved audio.  Non-interleaved (planar) Float32 PCM (nominal -1 -> +1) will be the canonical format.  The file-loading code I have written already takes care of the details of getting interleaved file data loaded into the right format.  All of the AudioUnits I wrote at Apple conformed to this format.
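
To make the planar vs. interleaved distinction concrete, the de-interleave step is roughly this (just a sketch, not the actual file-loading code):

void deinterleaveStereo(const float* interleaved, float* left, float* right, size_t frames)
{
    for (size_t i = 0; i < frames; ++i) {
        left[i] = interleaved[2 * i];       // L0, L1, L2, ...
        right[i] = interleaved[2 * i + 1];  // R0, R1, R2, ...
    }
}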

The AudioChannel and AudioBus classes have served very well as building blocks for the engine so far, and I'm very confident they will be sufficient to support future engine improvements like the modular routing we talked about.
Comment 6 Chris Marrin 2010-02-04 09:12:40 PST
I don't like the fact that there is an AudioFloatArray/AudioDoubleArray here. I thought you were using the WebGL buffer classes? We are looking at moving these to a separate spec, which would mean they would get a more generic name. But for now you should use WebGLFloatArray. You should also create a WebGLDoubleArray, which I believe will make its way into the ArrayBuffer spec. Later on we can change the names as needed.
Comment 7 Chris Rogers 2010-02-04 11:35:55 PST
Hi Chris, we'll definitely use WebGLFloatArray and friends when exposing anything to javascript as we discussed.  I really do like this idea.  In fact, I'm already using one for the realtime spectrum analyser.

But AudioFloatArray is really an "under-the-hood" lower-level building block (not exposed to javascript) and needs to be directly useable as a stack-based object and even more importantly directly embeddable as an ivar (object composition).  WebGLFloatArray needs to be created explicitly by calling create(), making it not useable in quite such a simple way.  I'm not saying it's impossible to write code using WebGLFloatArray, just that in the particular cases which appear frequently in audio processing code, it's a bit heavier.
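
As a rough illustration of the difference (the class name here is made up, just to show composition):

// AudioFloatArray can be embedded directly as an ivar and sized in the constructor:
class ReverbAccumulator {
public:
    explicit ReverbAccumulator(size_t size)
        : m_buffer(size)
    {
    }

private:
    AudioFloatArray m_buffer;
};

// whereas WebGLFloatArray has to be heap-allocated and held by reference,
// something like: RefPtr<WebGLFloatArray> buffer = WebGLFloatArray::create(size);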
Comment 8 Chris Marrin 2010-02-04 15:36:54 PST
I see the need for a lighter weight array object. But I don't see the need for these types. They don't really do anything beyond what the underlying Vector object does. So why not just use Vector directly?
Comment 9 Chris Rogers 2010-02-04 15:43:11 PST
I have a comment in AudioFloatArray.h:

// The main feature AudioFloatArray implements beyond Vector<float> is the ability
// to zero out the float array very efficiently using all the optimizations in memset()
// This is important for real-time audio algorithms (for example, convolution reverb)
// where performance is critical.
Comment 10 Chris Marrin 2010-02-05 07:26:24 PST
But you can use memset without wrapping it in an entire class. That seems like overkill.
Comment 11 Chris Rogers 2010-02-05 11:41:47 PST
Overkill?  I guess that's a matter of opinion.

What's better?

floatBuffer.zero();

or

memset(floatBuffer.data(), 0, sizeof(float) * floatBuffer.size());

Then, you have to remember whether it's sizeof(float) or sizeof(double), depending on whether it's a float or double array...

This method is used a fair bit throughout the engine code, and I think it's much better to have a concise method for this.
Comment 12 Chris Rogers 2010-02-05 12:05:29 PST
Instead of subclassing, I could add the zero() (with memset) method directly to the Vector class.  Technically, this is not quite correct since it's possible to have a vector of objects where calling zero() might put the objects into a bad state, destructors not getting called, etc.  But I think it would be OK, since people are generally going to be aware of these issues and would not have a use for calling zero() in these cases anyway.

Another option we can consider...
Comment 13 Adam Barth 2010-03-08 16:42:37 PST
Comment on attachment 47884 [details]
patch for some initial audio engine support classes: fixes some minor style problems with the initial patch

Please add the zero() method to Vector instead of creating these subclasses.  You can use a trait to ensure that the method can't be used on a type that causes problems.
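
For example, something along these lines (sketch only; the free function and exact trait name are illustrative):

// Restrict zeroing to types that are safe to clear with memset.
template<typename T>
void zeroVector(Vector<T>& vector)
{
    COMPILE_ASSERT(VectorTraits<T>::canInitializeWithMemset, zero_requires_memset_safe_type);
    memset(vector.data(), 0, sizeof(T) * vector.size());
}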
Comment 14 Chris Rogers 2010-08-27 14:20:02 PDT
Created attachment 65766 [details]
Patch
Comment 15 Chris Rogers 2010-08-27 14:24:23 PDT
I've put in a patch for wtf/Vector to add the zero() method, which is still being sorted through:
https://bugs.webkit.org/show_bug.cgi?id=36849

But in the meantime, although this patch is not yet quite perfect (still need to get rid of my use of AudioFloatArray and use Vector instead), it would be good to get the rest of the content reviewed.
Comment 16 Kenneth Russell 2010-08-30 16:15:39 PDT
Comment on attachment 65766 [details]
Patch

The core mixing logic, including adjustment of the mix gain, looks fine, but there are some structural problems for which I'm marking this r-. See below.

> Index: WebCore/platform/audio/Accelerate.cpp
> ===================================================================
> --- WebCore/platform/audio/Accelerate.cpp	(revision 0)
> +++ WebCore/platform/audio/Accelerate.cpp	(revision 0)
> @@ -0,0 +1,70 @@
> +/*
> + * Copyright (C) 2010 Google Inc. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * 1.  Redistributions of source code must retain the above copyright
> + *     notice, this list of conditions and the following disclaimer.
> + * 2.  Redistributions in binary form must reproduce the above copyright
> + *     notice, this list of conditions and the following disclaimer in the
> + *     documentation and/or other materials provided with the distribution.
> + * 3.  Neither the name of Apple Computer, Inc. ("Apple") nor the names of
> + *     its contributors may be used to endorse or promote products derived
> + *     from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY APPLE AND ITS CONTRIBUTORS "AS IS" AND ANY
> + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
> + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
> + * DISCLAIMED. IN NO EVENT SHALL APPLE OR ITS CONTRIBUTORS BE LIABLE FOR ANY
> + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
> + * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +// On the Mac use the highly optimized versions in Accelerate.framework
> +
> +#include "config.h"
> +
> +#if ENABLE(WEB_AUDIO)
> +
> +#include "Accelerate.h"
> +
> +#if !OS(DARWIN)
> +
> +void vsmul(const float* sourceP, int sourceStride, const float* scale, float* destP, int destStride, size_t framesToProcess)
> +{
> +    // FIXME: optimize for SSE
> +    int n = framesToProcess;
> +    float k = *scale;
> +    while (n--) {
> +        float sample = *sourceP;
> +        *destP = k * sample;
> +
> +        sourceP += sourceStride;
> +        destP += destStride;
> +    }
> +}
> +
> +void vadd(const float* source1P, int sourceStride1, const float* source2P, int sourceStride2, float* destP, int destStride, size_t framesToProcess)
> +{
> +    // FIXME: optimize for SSE
> +    int n = framesToProcess;
> +    while (n--) {
> +        float sample1 = *source1P;
> +        float sample2 = *source2P;
> +        *destP = sample1 + sample2;
> +
> +        source1P += sourceStride1;
> +        source2P += sourceStride2;
> +        destP += destStride;
> +    }
> +}
> +
> +#endif // !OS(DARWIN)
> +
> +#endif // ENABLE(WEB_AUDIO)
> Index: WebCore/platform/audio/Accelerate.h
> ===================================================================
> --- WebCore/platform/audio/Accelerate.h	(revision 0)
> +++ WebCore/platform/audio/Accelerate.h	(revision 0)
> @@ -0,0 +1,53 @@
> +/*
> + * Copyright (C) 2010 Google Inc. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * 1.  Redistributions of source code must retain the above copyright
> + *     notice, this list of conditions and the following disclaimer.
> + * 2.  Redistributions in binary form must reproduce the above copyright
> + *     notice, this list of conditions and the following disclaimer in the
> + *     documentation and/or other materials provided with the distribution.
> + * 3.  Neither the name of Apple Computer, Inc. ("Apple") nor the names of
> + *     its contributors may be used to endorse or promote products derived
> + *     from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY APPLE AND ITS CONTRIBUTORS "AS IS" AND ANY
> + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
> + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
> + * DISCLAIMED. IN NO EVENT SHALL APPLE OR ITS CONTRIBUTORS BE LIABLE FOR ANY
> + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
> + * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef Accelerate_h
> +#define Accelerate_h
> +
> +// Defines the interface for several vector math functions whose implementation will ideally be optimized
> +
> +#if OS(DARWIN)
> +// On the Mac use the highly optimized versions in Accelerate.framework
> +#include <Accelerate/Accelerate.h>
> +
> +#ifndef vadd
> +#define vadd vDSP_vadd
> +#endif
> +
> +#ifndef vsmul
> +#define vsmul vDSP_vsmul
> +#endif
> +
> +
> +#else
> +void vsmul(const float* sourceP, int sourceStride, const float* scale, float* destP, int destStride, size_t framesToProcess);
> +void vadd(const float* source1P, int sourceStride1, const float* source2P, int sourceStride2, float* destP, int destStride, size_t framesToProcess);

I think you should consider exposing these routines to the rest of WebCore either as static functions on a class or in a sub-namespace Accelerate, for better encapsulation. My opinion only though.
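
Something like this, for instance (just a sketch):

namespace WebCore {
namespace Accelerate {

void vsmul(const float* sourceP, int sourceStride, const float* scale, float* destP, int destStride, size_t framesToProcess);
void vadd(const float* source1P, int sourceStride1, const float* source2P, int sourceStride2, float* destP, int destStride, size_t framesToProcess);

} // namespace Accelerate
} // namespace WebCore

Call sites would then read Accelerate::vsmul(...), which makes the origin of the symbol obvious.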

> +#endif
> +
> +#endif // Accelerate_h
> Index: WebCore/platform/audio/AudioBus.cpp
> ===================================================================
> --- WebCore/platform/audio/AudioBus.cpp	(revision 0)
> +++ WebCore/platform/audio/AudioBus.cpp	(revision 0)
> @@ -0,0 +1,425 @@
> +/*
> + * Copyright (C) 2010 Google Inc. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * 1.  Redistributions of source code must retain the above copyright
> + *     notice, this list of conditions and the following disclaimer.
> + * 2.  Redistributions in binary form must reproduce the above copyright
> + *     notice, this list of conditions and the following disclaimer in the
> + *     documentation and/or other materials provided with the distribution.
> + * 3.  Neither the name of Apple Computer, Inc. ("Apple") nor the names of
> + *     its contributors may be used to endorse or promote products derived
> + *     from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY APPLE AND ITS CONTRIBUTORS "AS IS" AND ANY
> + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
> + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
> + * DISCLAIMED. IN NO EVENT SHALL APPLE OR ITS CONTRIBUTORS BE LIABLE FOR ANY
> + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
> + * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include "config.h"
> +
> +#if ENABLE(WEB_AUDIO)
> +
> +#include "AudioBus.h"
> +
> +#include "Accelerate.h"
> +#include <assert.h>
> +#include <math.h>
> +#include <wtf/OwnPtr.h>
> +
> +namespace WebCore {
> +
> +void AudioChannel::scale(double scale)
> +{
> +    float s = static_cast<float>(scale);
> +    vsmul(data(), 1, &s, data(), 1, frameSize());

frameSize() is very poorly named. To a person just coming into this code it sounds like the size in samples of one frame, but it's actually the number of frames. Please change it to something more descriptive like numberOfFrames().

> +}
> +
> +void AudioChannel::copyFrom(const AudioChannel& sourceChannel)
> +{
> +    bool isSafe = (sourceChannel.frameSize() >= frameSize());
> +    ASSERT(isSafe);
> +    if (!isSafe)
> +        return;
> +
> +    memcpy(data(), sourceChannel.constData(), sizeof(float) * frameSize());
> +}
> +
> +void AudioChannel::sumFrom(const AudioChannel& sourceChannel)
> +{
> +    bool isSafe = (sourceChannel.frameSize() >= frameSize());
> +    ASSERT(isSafe);
> +    if (!isSafe)
> +        return;
> +
> +    vadd(data(), 1, sourceChannel.constData(), 1, data(), 1, frameSize());
> +}
> +
> +float AudioChannel::maxAbsValue() const
> +{
> +    const float* p = constData();
> +    int n = frameSize();
> +
> +    float max = 0.0f;
> +    while (n--) {
> +        float value = *p++;
> +        float absValue = fabsf(value);
> +        if (absValue > max)
> +            max = absValue;
> +    }
> +
> +    return max;
> +}
> +
> +AudioBus::AudioBus(unsigned numberOfChannels, size_t frameSize, bool allocate)
> +    : m_frameSize(frameSize)
> +    , m_busGain(1.0)
> +    , m_isFirstTime(true)
> +    , m_sampleRate(0.0)
> +{
> +    m_channels.reserveInitialCapacity(numberOfChannels);
> +
> +    for (unsigned i = 0; i < numberOfChannels; ++i) {
> +        AudioChannel* channel = allocate ? new AudioChannel(frameSize) : new AudioChannel(0, frameSize);

Naked operator new. This should use adoptPtr if at all possible (and I think it is).
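
e.g. something along these lines (sketch):

PassOwnPtr<AudioChannel> channel = allocate ? adoptPtr(new AudioChannel(frameSize)) : adoptPtr(new AudioChannel(0, frameSize));
m_channels.append(channel);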

> +        m_channels.append(channel);
> +    }
> +
> +    m_layout = LayoutCanonical; // for now this is the only layout we define
> +}
> +
> +void AudioBus::setChannelMemory(unsigned channelIndex, float* storage, size_t frameSize)
> +{
> +    if (channelIndex < m_channels.size()) {
> +        channel(channelIndex)->set(storage, frameSize);
> +        m_frameSize = frameSize;
> +    }
> +}
> +
> +void AudioBus::zero()
> +{
> +    for (unsigned i = 0; i < m_channels.size(); ++i)
> +        m_channels[i]->zero();
> +}
> +
> +AudioChannel* AudioBus::channel(unsigned channel)
> +{
> +    if (channel < m_channels.size())
> +        return m_channels[channel].get();
> +
> +    return 0;

Why does this function attempt to be safe? Is it expected that callers may pass in out-of-range channel numbers? If not, just remove the check and let Vector's ASSERT take care of it.

> +}
> +
> +AudioChannel* AudioBus::channelByType(unsigned channelType)
> +{
> +    // For now we only support canonical channel layouts...
> +    if (m_layout != LayoutCanonical)
> +        return 0;
> +
> +    switch (numberOfChannels()) {
> +    case 1: // mono
> +        if (channelType == ChannelMono || channelType == ChannelLeft)
> +            return channel(0);
> +        return 0;
> +
> +    case 2: // stereo
> +        switch (channelType) {
> +        case ChannelLeft: return channel(0);
> +        case ChannelRight: return channel(1);
> +        default: return 0;
> +        }
> +
> +    case 4: // quad
> +        switch (channelType) {
> +        case ChannelLeft: return channel(0);
> +        case ChannelRight: return channel(1);
> +        case ChannelSurroundLeft: return channel(2);
> +        case ChannelSurroundRight: return channel(3);
> +        default: return 0;
> +        }
> +
> +    case 5: // 5.0
> +        switch (channelType) {
> +        case ChannelLeft: return channel(0);
> +        case ChannelRight: return channel(1);
> +        case ChannelCenter: return channel(2);
> +        case ChannelSurroundLeft: return channel(3);
> +        case ChannelSurroundRight: return channel(4);
> +        default: return 0;
> +        }
> +
> +    case 6: // 5.1
> +        switch (channelType) {
> +        case ChannelLeft: return channel(0);
> +        case ChannelRight: return channel(1);
> +        case ChannelCenter: return channel(2);
> +        case ChannelLFE: return channel(3);
> +        case ChannelSurroundLeft: return channel(4);
> +        case ChannelSurroundRight: return channel(5);
> +        default: return 0;
> +        }
> +
> +    default: // unknown canonical layout
> +        return 0;

This default case clause is unnecessary. If you want to catch invalid uses in debug builds, use ASSERT_NOT_REACHED.

> +    }
> +    
> +    return 0;
> +}
> +
> +// Returns true if the channel count and frame-size match.
> +bool AudioBus::topologyMatches(const AudioBus& bus) const
> +{
> +    if (numberOfChannels() != bus.numberOfChannels())
> +        return false; // channel mismatch
> +
> +    // Make sure source bus has enough frames.
> +    if (frameSize() > bus.frameSize())
> +        return false; // frame-size mismatch
> +
> +    return true;
> +}
> +
> +// Just copies the samples from the source bus to this one.
> +// This is just a simple copy if the number of channels match, otherwise a mixup or mixdown is done.
> +// For now, we just support a mixup from mono -> stereo.
> +void AudioBus::copyFrom(const AudioBus& sourceBus)
> +{
> +    if (&sourceBus == this)
> +        return;
> +
> +    size_t frameSize = sourceBus.frameSize();
> +
> +    if (numberOfChannels() == sourceBus.numberOfChannels()) {
> +        // Simple copy
> +        for (unsigned i = 0; i < numberOfChannels(); ++i) {
> +            AudioChannel* sourceChannel = const_cast<AudioBus&>(sourceBus).channel(i);
> +            channel(i)->copyFrom(*sourceChannel);
> +        }
> +    } else if (numberOfChannels() == 2 && sourceBus.numberOfChannels() == 1) {
> +        // Handle mono -> stereo case (for now simply copy mono channel into both left and right)
> +        // FIXME: Really we should apply an equal-power scaling factor here, since we're effectively panning center...
> +        const float* source = const_cast<AudioBus&>(sourceBus).channel(0)->data();
> +        float* destinationL = channel(0)->data();
> +        float* destinationR = channel(1)->data();
> +        memcpy(destinationL, source, sizeof(float) * frameSize);
> +        memcpy(destinationR, source, sizeof(float) * frameSize);

Where are the checks that there is enough storage allocated in channel(0) and channel(1)? Also, it seems to me that these memory copies should be in AudioChannel, not here.
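
i.e. push them down into AudioChannel, roughly (sketch only, exact signatures up to you):

// AudioChannel owns the bounds check and the memcpy; the bus just dispatches.
const AudioChannel* sourceChannel = sourceBus.channel(0);
channel(0)->copyFrom(sourceChannel);
channel(1)->copyFrom(sourceChannel);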

> +    } else {
> +        // Case not handled
> +        ASSERT_NOT_REACHED();
> +    }
> +}
> +
> +PassOwnPtr<AudioBus> AudioBus::createBufferFromRange(AudioBus* sourceBuffer, unsigned startFrame, unsigned endFrame)
> +{
> +    size_t numberOfSourceFrames = sourceBuffer->frameSize();
> +    unsigned numberOfChannels = sourceBuffer->numberOfChannels();
> +
> +    // Sanity checking
> +    bool isRangeSafe = startFrame < endFrame && endFrame <= numberOfSourceFrames;
> +
> +    ASSERT(isRangeSafe);
> +    if (!isRangeSafe)
> +        return 0;
> +
> +    size_t rangeSize = endFrame - startFrame;
> +
> +    PassOwnPtr<AudioBus> audioBus = new AudioBus(numberOfChannels, rangeSize);
> +    audioBus->setSampleRate(sourceBuffer->sampleRate());
> +
> +    for (unsigned i = 0; i < numberOfChannels; ++i) {
> +        float* source = sourceBuffer->channel(i)->data();
> +        float* destination = audioBus->channel(i)->data();
> +        memcpy(destination, source + startFrame, sizeof(float) * rangeSize);

Same comment about encapsulating these memcpy's in AudioChannel.

> +    }
> +
> +    return audioBus;
> +}
> +
> +float AudioBus::maxAbsValue() const
> +{
> +    float max = 0.0f;
> +    for (unsigned i = 0; i < numberOfChannels(); ++i) {
> +        AudioChannel* channel = const_cast<AudioBus*>(this)->channel(i);
> +        float channelMax = channel->maxAbsValue();
> +        if (channelMax > max)
> +            max = channelMax;
> +    }
> +
> +    return max;
> +}
> +
> +void AudioBus::normalize()
> +{
> +    float max = maxAbsValue();
> +    if (max)
> +        scale(1.0f / max);
> +}
> +
> +void AudioBus::scale(double scale)
> +{
> +    for (unsigned i = 0; i < numberOfChannels(); ++i)
> +        channel(i)->scale(scale);
> +}
> +
> +void AudioBus::sumFrom(const AudioBus &sourceBus)
> +{
> +    if (numberOfChannels() == sourceBus.numberOfChannels()) {
> +        for (unsigned i = 0; i < numberOfChannels(); ++i)
> +            channel(i)->sumFrom(*const_cast<AudioBus&>(sourceBus).channel(i));

This const_cast is pretty ugly. You ought to be able to provide enough overloaded const variants of methods on AudioBus and AudioChannel to avoid it. Also, it isn't clear to me why you are introducing an asymmetry in the API where channel() returns AudioChannel* but these routines take AudioBus&.
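
For instance (sketch), a const accessor alongside the existing one would remove the need for the const_cast here:

AudioChannel* channel(unsigned index) { return m_channels[index].get(); }
const AudioChannel* channel(unsigned index) const { return m_channels[index].get(); }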

> +    } else if (numberOfChannels() == 2 && sourceBus.numberOfChannels() == 1) {
> +        // Handle mono -> stereo case (for now simply sum mono channel into both left and right)
> +        // FIXME: Really we should apply an equal-power scaling factor here, since we're effectively panning center...
> +        AudioChannel* sourceChannel = const_cast<AudioBus&>(sourceBus).channel(0);
> +        channel(0)->sumFrom(*sourceChannel);
> +        channel(1)->sumFrom(*sourceChannel);

Same comments about const_cast and dereferencing.

> +    } else {
> +        // Case not handled
> +        ASSERT_NOT_REACHED();
> +    }
> +}
> +
> +void AudioBus::processWithGainFromMonoStereo(const AudioBus &sourceBus, double* lastMixGain, double targetGain, bool sumToBus)
> +{
> +    // We don't want to suddenly change the gain from mixing one time slice to the next,
> +    // so we "de-zipper" by slowly changing the gain each sample-frame until we've achieved the target gain.
> +
> +    // FIXME: optimize this method (SSE, etc.)
> +    // FIXME: Need fast path here when gain has converged on targetGain. In this case, de-zippering is no longer needed.
> +    // FIXME: Need fast path when this==sourceBus && lastMixGain==targetGain==1.0 && sumToBus==false (this is a NOP)
> +
> +    // Take master bus gain into account as well as the targetGain.
> +    double totalDesiredGain = m_busGain * targetGain;
> +
> +    // First time, snap directly to totalDesiredGain.
> +    double gain = m_isFirstTime ? totalDesiredGain : *lastMixGain;
> +    m_isFirstTime = false;
> +
> +    int numberOfSourceChannels = sourceBus.numberOfChannels();
> +    int numberOfDestinationChannels = numberOfChannels();
> +
> +    AudioBus& sourceBusSafe = const_cast<AudioBus&>(sourceBus);
> +    const float* sourceL = sourceBusSafe.channelByType(ChannelLeft)->data();
> +    const float* sourceR = numberOfSourceChannels > 1 ? sourceBusSafe.channelByType(ChannelRight)->data() : 0;
> +
> +    float* destinationL = channelByType(ChannelLeft)->data();
> +    float* destinationR = numberOfDestinationChannels > 1 ? channelByType(ChannelRight)->data() : 0;
> +
> +    const double DezipperRate = 0.005;
> +    int framesToProcess = frameSize();
> +
> +    if (sumToBus) {
> +        // Sum to our bus
> +        if (sourceR && destinationR) {
> +            // Stereo
> +            while (framesToProcess--) {
> +                float sampleL = *sourceL++;
> +                float sampleR = *sourceR++;
> +                *destinationL++ += static_cast<float>(gain * sampleL);
> +                *destinationR++ += static_cast<float>(gain * sampleR);
> +
> +                // Slowly change gain to desired gain.
> +                gain += (totalDesiredGain - gain) * DezipperRate;
> +            }
> +        } else if (destinationR) {
> +            // Mono -> stereo (mix equally into L and R)
> +            // FIXME: Really we should apply an equal-power scaling factor here, since we're effectively panning center...
> +            while (framesToProcess--) {
> +                float sample = *sourceL++;
> +                *destinationL++ += static_cast<float>(gain * sample);
> +                *destinationR++ += static_cast<float>(gain * sample);
> +
> +                // Slowly change gain to desired gain.
> +                gain += (totalDesiredGain - gain) * DezipperRate;
> +            }
> +        } else {
> +            // Mono
> +            while (framesToProcess--) {
> +                float sampleL = *sourceL++;
> +                *destinationL++ += static_cast<float>(gain * sampleL);
> +
> +                // Slowly change gain to desired gain.
> +                gain += (totalDesiredGain - gain) * DezipperRate;
> +            }
> +        }
> +    } else {
> +        // Process directly (without summing) to our bus
> +        if (sourceR && destinationR) {
> +            // Stereo
> +            while (framesToProcess--) {
> +                float sampleL = *sourceL++;
> +                float sampleR = *sourceR++;
> +                *destinationL++ = static_cast<float>(gain * sampleL);
> +                *destinationR++ = static_cast<float>(gain * sampleR);
> +
> +                // Slowly change gain to desired gain.
> +                gain += (totalDesiredGain - gain) * DezipperRate;
> +            }
> +        } else if (destinationR) {
> +            // Mono -> stereo (mix equally into L and R)
> +            // FIXME: Really we should apply an equal-power scaling factor here, since we're effectively panning center...
> +            while (framesToProcess--) {
> +                float sample = *sourceL++;
> +                *destinationL++ = static_cast<float>(gain * sample);
> +                *destinationR++ = static_cast<float>(gain * sample);
> +
> +                // Slowly change gain to desired gain.
> +                gain += (totalDesiredGain - gain) * DezipperRate;
> +            }
> +        } else {
> +            // Mono
> +            while (framesToProcess--) {
> +                float sampleL = *sourceL++;
> +                *destinationL++ = static_cast<float>(gain * sampleL);
> +
> +                // Slowly change gain to desired gain.
> +                gain += (totalDesiredGain - gain) * DezipperRate;
> +            }
> +        }
> +    }
> +
> +    // Save the target gain as the starting point for next time around.
> +    *lastMixGain = gain;
> +}
> +
> +void AudioBus::processWithGainFrom(const AudioBus &sourceBus, double* lastMixGain, double targetGain, bool sumToBus)
> +{
> +    // Make sure we're summing from same type of bus.
> +    // We *are* able to sum from mono -> stereo
> +    if (sourceBus.numberOfChannels() != 1 && !topologyMatches(sourceBus))
> +        return;
> +
> +    // Dispatch for different channel layouts
> +    switch (numberOfChannels()) {
> +    case 1: // mono
> +    case 2: // stereo
> +        processWithGainFromMonoStereo(sourceBus, lastMixGain, targetGain, sumToBus);
> +        break;
> +    case 4: // FIXME: implement quad
> +    case 5: // FIXME: implement 5.0
> +    default:
> +        ASSERT_NOT_REACHED();
> +        break;
> +    }
> +}
> +
> +void AudioBus::sumWithGainFrom(const AudioBus &sourceBus, double* lastMixGain, double targetGain)
> +{
> +    processWithGainFrom(sourceBus, lastMixGain, targetGain, true);
> +}
> +
> +void AudioBus::copyWithGainFrom(const AudioBus &sourceBus, double* lastMixGain, double targetGain)
> +{
> +    processWithGainFrom(sourceBus, lastMixGain, targetGain, false);
> +}
> +
> +} // WebCore
> +
> +#endif // ENABLE(WEB_AUDIO)
> Index: WebCore/platform/audio/AudioBus.h
> ===================================================================
> --- WebCore/platform/audio/AudioBus.h	(revision 0)
> +++ WebCore/platform/audio/AudioBus.h	(revision 0)
> @@ -0,0 +1,215 @@
> +/*
> + * Copyright (C) 2010 Google Inc. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * 1.  Redistributions of source code must retain the above copyright
> + *     notice, this list of conditions and the following disclaimer.
> + * 2.  Redistributions in binary form must reproduce the above copyright
> + *     notice, this list of conditions and the following disclaimer in the
> + *     documentation and/or other materials provided with the distribution.
> + * 3.  Neither the name of Apple Computer, Inc. ("Apple") nor the names of
> + *     its contributors may be used to endorse or promote products derived
> + *     from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY APPLE AND ITS CONTRIBUTORS "AS IS" AND ANY
> + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
> + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
> + * DISCLAIMED. IN NO EVENT SHALL APPLE OR ITS CONTRIBUTORS BE LIABLE FOR ANY
> + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
> + * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef AudioBus_h
> +#define AudioBus_h
> +
> +#include "AudioFloatArray.h"
> +#include <wtf/Noncopyable.h>
> +#include <wtf/PassOwnPtr.h>
> +#include <wtf/Vector.h>
> +
> +namespace WebCore {
> +
> +// An AudioChannel represents a buffer of non-interleaved floating-point audio samples.
> +// The PCM samples are normally assumed to be in a nominal range -1.0 -> +1.0
> +class AudioChannel : public Noncopyable {

AudioChannel should be in its own header and have its own implementation file.

> +public:
> +    // Memory can be externally referenced, or can be internally allocated with an AudioFloatArray.
> +
> +    // Reference an external buffer.
> +    AudioChannel(float* storage, size_t frameSize)
> +        : m_frameSize(frameSize), m_rawPointer(storage), m_memBuffer(0) { }
> +
> +    // Manage storage for us.
> +    explicit AudioChannel(size_t frameSize)
> +        : m_frameSize(frameSize)
> +        , m_rawPointer(0)
> +    {
> +        m_memBuffer = new AudioFloatArray(frameSize);
> +    }
> +
> +    // A "blank" audio channel -- must call set() before it's useful...
> +    AudioChannel()
> +        : m_frameSize(0)
> +        , m_rawPointer(0)
> +        , m_memBuffer(0)
> +    {
> +    }
> +
> +    // Redefine the memory for this channel.
> +    // storage represents external memory not managed by this object.
> +    void set(float* storage, size_t frameSize)
> +    {
> +        m_memBuffer.clear(); // cleanup managed storage
> +        m_rawPointer = storage;
> +        m_frameSize = frameSize;
> +    }
> +
> +    // How many sample-frames do we contain?
> +    size_t frameSize() const { return m_frameSize; }

As mentioned above, this method is very poorly named. Please change it. Suggestion: numberOfFrames().

> +
> +    // Direct access to PCM sample data
> +    float* data() { return m_rawPointer ? m_rawPointer : m_memBuffer->data(); }
> +    const float* constData() const { return const_cast<AudioChannel*>(this)->data(); }

You should be able to provide a const overload of AudioFloatArray::data() to avoid this const_cast. You also shouldn't need the separately named constData(), just a const overload of data() returning const float*.
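
For instance (sketch; AudioFloatArray::data() would need a matching const overload):

float* data() { return m_rawPointer ? m_rawPointer : m_memBuffer->data(); }
const float* data() const { return m_rawPointer ? m_rawPointer : m_memBuffer->data(); }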

> +
> +    // Zeroes out all sample values in buffer.
> +    void zero()
> +    {
> +        if (m_memBuffer.get())
> +            m_memBuffer->zero();
> +        else
> +            memset(m_rawPointer, 0, sizeof(float) * m_frameSize);
> +    }
> +
> +    // Scales all samples by the same amount.
> +    void scale(double scale);
> +
> +    // A simple memcpy() from the source channel
> +    void copyFrom(const AudioChannel &sourceChannel);
> +
> +    // Sums (with unity gain) from the source channel.
> +    void sumFrom(const AudioChannel &sourceChannel);
> +
> +    // Returns maximum absolute value (useful for normalization).
> +    float maxAbsValue() const;
> +
> +private:
> +    size_t m_frameSize;
> +
> +    float* m_rawPointer;
> +    OwnPtr<AudioFloatArray> m_memBuffer;
> +};
> +
> +// An AudioBus represents a collection of one or more AudioChannels.
> +// The data layout is "planar" as opposed to "interleaved".
> +// An AudioBus with one channel is mono, an AudioBus with two channels is stereo, etc.
> +class AudioBus  : public Noncopyable {
> +public:
> +    enum {
> +        ChannelLeft = 0,
> +        ChannelRight = 1,
> +        ChannelCenter = 2, // center and mono are the same
> +        ChannelMono = 2,
> +        ChannelLFE = 3,
> +        ChannelSurroundLeft = 4,
> +        ChannelSurroundRight = 5,
> +    };
> +
> +    enum {
> +        LayoutCanonical = 0
> +        // Can define non-standard layouts here
> +    };
> +
> +    // allocate indicates whether or not to initially have the AudioChannels created with managed storage.
> +    // Normal usage is to pass true here, in which case the AudioChannels will memory-manage their own storage.
> +    // If allocate is false then setChannelMemory() has to be called later on for each channel before the AudioBus is useable...
> +    AudioBus(unsigned numberOfChannels, size_t frameSize, bool allocate = true);
> +
> +    // Tells the given channel to use an externally allocated buffer.
> +    void setChannelMemory(unsigned channelIndex, float* storage, size_t frameSize);
> +
> +    // Channels
> +    unsigned numberOfChannels() const { return m_channels.size(); }
> +    AudioChannel* channel(unsigned channel);
> +    AudioChannel* channelByType(unsigned type);
> +
> +    // Number of sample-frames
> +    size_t frameSize() const { return m_frameSize; }
> +
> +    // Sample-rate : 0.0 if unknown or "don't care"
> +    double sampleRate() const { return m_sampleRate; }
> +    void setSampleRate(double sampleRate) { m_sampleRate = sampleRate; }
> +
> +    // Zeroes all channels.
> +    void zero();
> +
> +    // Returns true if the channel count and frame-size match.
> +    bool topologyMatches(const AudioBus &sourceBus) const;
> +
> +    // Assuming sourceBus has the same topology, copies sample data from each channel of sourceBus to our corresponding channel.
> +    void copyFrom(const AudioBus &sourceBus);
> +
> +    // Creates a new buffer from a range in the source buffer.
> +    // 0 may be returned if the range does not fit in the sourceBuffer
> +    static PassOwnPtr<AudioBus> createBufferFromRange(AudioBus* sourceBuffer, unsigned startFrame, unsigned endFrame);
> +
> +    // Scales all samples by the same amount.
> +    void scale(double scale);
> +
> +    // Master gain for this bus - used with sumWithGainFrom() below
> +    void setGain(double gain) { m_busGain = gain; }
> +    double gain() { return m_busGain; }
> +
> +    void reset() { m_isFirstTime = true; } // for de-zippering
> +
> +    // Sums the sourceBus into our bus with unity gain.
> +    // Our own internal gain m_busGain is ignored.
> +    void sumFrom(const AudioBus &sourceBus);
> +
> +    // Sum or copy each channel from sourceBus into our corresponding channel.
> +    // We scale by targetGain (and our own internal gain m_busGain), performing "de-zippering" to smoothly change from *lastMixGain to (targetGain*m_busGain).
> +    // The caller is responsible for setting up lastMixGain to point to storage which is unique for every "stream" which will be summed to this bus.
> +    // This represents the dezippering memory.
> +    void sumWithGainFrom(const AudioBus &sourceBus, double* lastMixGain, double targetGain);
> +    void copyWithGainFrom(const AudioBus &sourceBus, double* lastMixGain, double targetGain);
> +
> +    // Returns maximum absolute value across all channels (useful for normalization).
> +    float maxAbsValue() const;
> +
> +    // Makes maximum absolute value == 1.0 (if possible).
> +    void normalize();
> +
> +protected:
> +    AudioBus() { };
> +
> +    void processWithGainFrom(const AudioBus &sourceBus, double* lastMixGain, double targetGain, bool sumToBus);
> +    void processWithGainFromMonoStereo(const AudioBus &sourceBus, double* lastMixGain, double targetGain, bool sumToBus);
> +
> +    size_t m_frameSize;
> +
> +    Vector<OwnPtr<AudioChannel> > m_channels;
> +
> +    int m_layout;
> +
> +    double m_busGain;
> +    bool m_isFirstTime;
> +    double m_sampleRate; // 0.0 if unknown or N/A
> +};
> +
> +// Abstract base-class for a pull-model client.
> +// provideInput() gets called repeatedly to render time-slices of a continuous audio stream.
> +class AudioSourceProvider {

This class should be in its own header as well.

> +public:
> +    virtual void provideInput(AudioBus* bus, size_t framesToProcess) = 0;
> +    virtual ~AudioSourceProvider() { }
> +};
> +
> +} // WebCore
> +
> +#endif // AudioBus_h
Comment 17 Chris Rogers 2010-08-30 18:42:12 PDT
Created attachment 65995 [details]
Patch
Comment 18 Kenneth Russell 2010-08-31 18:22:38 PDT
Comment on attachment 65995 [details]
Patch

This looks good to me; r=me. A couple of minor issues which you can fix upon commit if you're using webkit-patch land, or upload a new patch if you're going to use the commit queue.


> Index: WebCore/platform/audio/AudioBus.cpp
> ===================================================================
> --- WebCore/platform/audio/AudioBus.cpp	(revision 0)
> +++ WebCore/platform/audio/AudioBus.cpp	(revision 0)
> @@ -0,0 +1,364 @@
> +/*
> + * Copyright (C) 2010 Google Inc. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * 1.  Redistributions of source code must retain the above copyright
> + *     notice, this list of conditions and the following disclaimer.
> + * 2.  Redistributions in binary form must reproduce the above copyright
> + *     notice, this list of conditions and the following disclaimer in the
> + *     documentation and/or other materials provided with the distribution.
> + * 3.  Neither the name of Apple Computer, Inc. ("Apple") nor the names of
> + *     its contributors may be used to endorse or promote products derived
> + *     from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY APPLE AND ITS CONTRIBUTORS "AS IS" AND ANY
> + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
> + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
> + * DISCLAIMED. IN NO EVENT SHALL APPLE OR ITS CONTRIBUTORS BE LIABLE FOR ANY
> + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
> + * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include "config.h"
> +
> +#if ENABLE(WEB_AUDIO)
> +
> +#include "AudioBus.h"
> +
> +#include "Accelerate.h"
> +#include <assert.h>
> +#include <math.h>
> +#include <wtf/OwnPtr.h>
> +#include <wtf/PassOwnPtr.h>
> +
> +namespace WebCore {
> +
> +AudioBus::AudioBus(unsigned numberOfChannels, size_t length, bool allocate)
> +    : m_length(length)
> +    , m_busGain(1.0)
> +    , m_isFirstTime(true)
> +    , m_sampleRate(0.0)
> +{
> +    m_channels.reserveInitialCapacity(numberOfChannels);
> +
> +    for (unsigned i = 0; i < numberOfChannels; ++i) {
> +        PassOwnPtr<AudioChannel> channel = allocate ? adoptPtr(new AudioChannel(length)) : adoptPtr(new AudioChannel(0, length));
> +        m_channels.append(channel);
> +    }
> +
> +    m_layout = LayoutCanonical; // for now this is the only layout we define
> +}
> +
> +void AudioBus::setChannelMemory(unsigned channelIndex, float* storage, size_t length)
> +{
> +    if (channelIndex < m_channels.size()) {
> +        channel(channelIndex)->set(storage, length);
> +        m_length = length;

This still looks fishy to me; only one of the channels is having its storage set, but the overall length of the AudioBus is being updated.

> +    }
> +}
> +
> +void AudioBus::zero()
> +{
> +    for (unsigned i = 0; i < m_channels.size(); ++i)
> +        m_channels[i]->zero();
> +}
> +
> +AudioChannel* AudioBus::channelByType(unsigned channelType)
> +{
> +    // For now we only support canonical channel layouts...
> +    if (m_layout != LayoutCanonical)
> +        return 0;
> +
> +    switch (numberOfChannels()) {
> +    case 1: // mono
> +        if (channelType == ChannelMono || channelType == ChannelLeft)
> +            return channel(0);
> +        return 0;
> +
> +    case 2: // stereo
> +        switch (channelType) {
> +        case ChannelLeft: return channel(0);
> +        case ChannelRight: return channel(1);
> +        default: return 0;
> +        }
> +
> +    case 4: // quad
> +        switch (channelType) {
> +        case ChannelLeft: return channel(0);
> +        case ChannelRight: return channel(1);
> +        case ChannelSurroundLeft: return channel(2);
> +        case ChannelSurroundRight: return channel(3);
> +        default: return 0;
> +        }
> +
> +    case 5: // 5.0
> +        switch (channelType) {
> +        case ChannelLeft: return channel(0);
> +        case ChannelRight: return channel(1);
> +        case ChannelCenter: return channel(2);
> +        case ChannelSurroundLeft: return channel(3);
> +        case ChannelSurroundRight: return channel(4);
> +        default: return 0;
> +        }
> +
> +    case 6: // 5.1
> +        switch (channelType) {
> +        case ChannelLeft: return channel(0);
> +        case ChannelRight: return channel(1);
> +        case ChannelCenter: return channel(2);
> +        case ChannelLFE: return channel(3);
> +        case ChannelSurroundLeft: return channel(4);
> +        case ChannelSurroundRight: return channel(5);
> +        default: return 0;
> +        }
> +    }
> +    
> +    ASSERT_NOT_REACHED();
> +    return 0;
> +}
> +
> +// Returns true if the channel count and frame-size match.
> +bool AudioBus::topologyMatches(const AudioBus& bus) const
> +{
> +    if (numberOfChannels() != bus.numberOfChannels())
> +        return false; // channel mismatch
> +
> +    // Make sure source bus has enough frames.
> +    if (length() > bus.length())
> +        return false; // frame-size mismatch
> +
> +    return true;
> +}
> +
> +PassOwnPtr<AudioBus> AudioBus::createBufferFromRange(AudioBus* sourceBuffer, unsigned startFrame, unsigned endFrame)
> +{
> +    size_t numberOfSourceFrames = sourceBuffer->length();
> +    unsigned numberOfChannels = sourceBuffer->numberOfChannels();
> +
> +    // Sanity checking
> +    bool isRangeSafe = startFrame < endFrame && endFrame <= numberOfSourceFrames;
> +    ASSERT(isRangeSafe);
> +    if (!isRangeSafe)
> +        return 0;
> +
> +    size_t rangeLength = endFrame - startFrame;
> +
> +    PassOwnPtr<AudioBus> audioBus = adoptPtr(new AudioBus(numberOfChannels, rangeLength));

If this were a PassRefPtr, holding on to it as one instead of as a RefPtr would be incorrect. To match the preferred style, use OwnPtr<AudioBus> here; see below for the return value.

> +    audioBus->setSampleRate(sourceBuffer->sampleRate());
> +
> +    for (unsigned i = 0; i < numberOfChannels; ++i)
> +        audioBus->channel(i)->copyFromRange(sourceBuffer->channel(i), startFrame, endFrame);
> +
> +    return audioBus;

Here, use "return audioBus.release()".

> +}
> +
> +float AudioBus::maxAbsValue() const
> +{
> +    float max = 0.0f;
> +    for (unsigned i = 0; i < numberOfChannels(); ++i) {
> +        AudioChannel* channel = const_cast<AudioBus*>(this)->channel(i);

Shouldn't you be able to write "const AudioChannel* channel = ..." now, and get rid of the const_cast?

> +        float channelMax = channel->maxAbsValue();
> +        if (channelMax > max)
> +            max = channelMax;

Could just write "max = std::max(max, channel->maxAbsValue())".

> +    }
> +
> +    return max;
> +}
> +
> +void AudioBus::normalize()
> +{
> +    float max = maxAbsValue();
> +    if (max)
> +        scale(1.0f / max);
> +}
> +
> +void AudioBus::scale(double scale)
> +{
> +    for (unsigned i = 0; i < numberOfChannels(); ++i)
> +        channel(i)->scale(scale);
> +}
> +
> +// Just copies the samples from the source bus to this one.
> +// This is just a simple copy if the number of channels match, otherwise a mixup or mixdown is done.
> +// For now, we just support a mixup from mono -> stereo.
> +void AudioBus::copyFrom(const AudioBus& sourceBus)
> +{
> +    if (&sourceBus == this)
> +        return;
> +
> +    if (numberOfChannels() == sourceBus.numberOfChannels()) {
> +        for (unsigned i = 0; i < numberOfChannels(); ++i)
> +            channel(i)->copyFrom(sourceBus.channel(i));
> +    } else if (numberOfChannels() == 2 && sourceBus.numberOfChannels() == 1) {
> +        // Handle mono -> stereo case (for now simply copy mono channel into both left and right)
> +        // FIXME: Really we should apply an equal-power scaling factor here, since we're effectively panning center...
> +        const AudioChannel* sourceChannel = sourceBus.channel(0);
> +        channel(0)->copyFrom(sourceChannel);
> +        channel(1)->copyFrom(sourceChannel);
> +    } else {
> +        // Case not handled
> +        ASSERT_NOT_REACHED();
> +    }
> +}
> +
> +void AudioBus::sumFrom(const AudioBus &sourceBus)
> +{
> +    if (numberOfChannels() == sourceBus.numberOfChannels()) {
> +        for (unsigned i = 0; i < numberOfChannels(); ++i)
> +            channel(i)->sumFrom(sourceBus.channel(i));
> +    } else if (numberOfChannels() == 2 && sourceBus.numberOfChannels() == 1) {
> +        // Handle mono -> stereo case (for now simply sum mono channel into both left and right)
> +        // FIXME: Really we should apply an equal-power scaling factor here, since we're effectively panning center...
> +        const AudioChannel* sourceChannel = sourceBus.channel(0);
> +        channel(0)->sumFrom(sourceChannel);
> +        channel(1)->sumFrom(sourceChannel);
> +    } else {
> +        // Case not handled
> +        ASSERT_NOT_REACHED();
> +    }
> +}
> +
> +void AudioBus::processWithGainFromMonoStereo(const AudioBus &sourceBus, double* lastMixGain, double targetGain, bool sumToBus)
> +{
> +    // We don't want to suddenly change the gain from mixing one time slice to the next,
> +    // so we "de-zipper" by slowly changing the gain each sample-frame until we've achieved the target gain.
> +
> +    // FIXME: optimize this method (SSE, etc.)
> +    // FIXME: Need fast path here when gain has converged on targetGain. In this case, de-zippering is no longer needed.
> +    // FIXME: Need fast path when this==sourceBus && lastMixGain==targetGain==1.0 && sumToBus==false (this is a NOP)
> +
> +    // Take master bus gain into account as well as the targetGain.
> +    double totalDesiredGain = m_busGain * targetGain;
> +
> +    // First time, snap directly to totalDesiredGain.
> +    double gain = m_isFirstTime ? totalDesiredGain : *lastMixGain;
> +    m_isFirstTime = false;
> +
> +    int numberOfSourceChannels = sourceBus.numberOfChannels();
> +    int numberOfDestinationChannels = numberOfChannels();
> +
> +    AudioBus& sourceBusSafe = const_cast<AudioBus&>(sourceBus);
> +    const float* sourceL = sourceBusSafe.channelByType(ChannelLeft)->data();
> +    const float* sourceR = numberOfSourceChannels > 1 ? sourceBusSafe.channelByType(ChannelRight)->data() : 0;
> +
> +    float* destinationL = channelByType(ChannelLeft)->data();
> +    float* destinationR = numberOfDestinationChannels > 1 ? channelByType(ChannelRight)->data() : 0;
> +
> +    const double DezipperRate = 0.005;
> +    int framesToProcess = length();
> +
> +    if (sumToBus) {
> +        // Sum to our bus
> +        if (sourceR && destinationR) {
> +            // Stereo
> +            while (framesToProcess--) {
> +                float sampleL = *sourceL++;
> +                float sampleR = *sourceR++;
> +                *destinationL++ += static_cast<float>(gain * sampleL);
> +                *destinationR++ += static_cast<float>(gain * sampleR);
> +
> +                // Slowly change gain to desired gain.
> +                gain += (totalDesiredGain - gain) * DezipperRate;
> +            }
> +        } else if (destinationR) {
> +            // Mono -> stereo (mix equally into L and R)
> +            // FIXME: Really we should apply an equal-power scaling factor here, since we're effectively panning center...
> +            while (framesToProcess--) {
> +                float sample = *sourceL++;
> +                *destinationL++ += static_cast<float>(gain * sample);
> +                *destinationR++ += static_cast<float>(gain * sample);
> +
> +                // Slowly change gain to desired gain.
> +                gain += (totalDesiredGain - gain) * DezipperRate;
> +            }
> +        } else {
> +            // Mono
> +            while (framesToProcess--) {
> +                float sampleL = *sourceL++;
> +                *destinationL++ += static_cast<float>(gain * sampleL);
> +
> +                // Slowly change gain to desired gain.
> +                gain += (totalDesiredGain - gain) * DezipperRate;
> +            }
> +        }
> +    } else {
> +        // Process directly (without summing) to our bus
> +        if (sourceR && destinationR) {
> +            // Stereo
> +            while (framesToProcess--) {
> +                float sampleL = *sourceL++;
> +                float sampleR = *sourceR++;
> +                *destinationL++ = static_cast<float>(gain * sampleL);
> +                *destinationR++ = static_cast<float>(gain * sampleR);
> +
> +                // Slowly change gain to desired gain.
> +                gain += (totalDesiredGain - gain) * DezipperRate;
> +            }
> +        } else if (destinationR) {
> +            // Mono -> stereo (mix equally into L and R)
> +            // FIXME: Really we should apply an equal-power scaling factor here, since we're effectively panning center...
> +            while (framesToProcess--) {
> +                float sample = *sourceL++;
> +                *destinationL++ = static_cast<float>(gain * sample);
> +                *destinationR++ = static_cast<float>(gain * sample);
> +
> +                // Slowly change gain to desired gain.
> +                gain += (totalDesiredGain - gain) * DezipperRate;
> +            }
> +        } else {
> +            // Mono
> +            while (framesToProcess--) {
> +                float sampleL = *sourceL++;
> +                *destinationL++ = static_cast<float>(gain * sampleL);
> +
> +                // Slowly change gain to desired gain.
> +                gain += (totalDesiredGain - gain) * DezipperRate;
> +            }
> +        }
> +    }
> +
> +    // Save the target gain as the starting point for next time around.
> +    *lastMixGain = gain;
> +}
> +
> +void AudioBus::processWithGainFrom(const AudioBus &sourceBus, double* lastMixGain, double targetGain, bool sumToBus)
> +{
> +    // Make sure we're summing from same type of bus.
> +    // We *are* able to sum from mono -> stereo
> +    if (sourceBus.numberOfChannels() != 1 && !topologyMatches(sourceBus))
> +        return;
> +
> +    // Dispatch for different channel layouts
> +    switch (numberOfChannels()) {
> +    case 1: // mono
> +    case 2: // stereo
> +        processWithGainFromMonoStereo(sourceBus, lastMixGain, targetGain, sumToBus);
> +        break;
> +    case 4: // FIXME: implement quad
> +    case 5: // FIXME: implement 5.0
> +    default:
> +        ASSERT_NOT_REACHED();
> +        break;
> +    }
> +}
> +
> +void AudioBus::copyWithGainFrom(const AudioBus &sourceBus, double* lastMixGain, double targetGain)
> +{
> +    processWithGainFrom(sourceBus, lastMixGain, targetGain, false);
> +}
> +
> +void AudioBus::sumWithGainFrom(const AudioBus &sourceBus, double* lastMixGain, double targetGain)
> +{
> +    processWithGainFrom(sourceBus, lastMixGain, targetGain, true);
> +}
> +
> +} // WebCore
> +
> +#endif // ENABLE(WEB_AUDIO)
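
The de-zippering loops above all share the same one-pole smoothing step: on every frame the running gain moves a small fraction of the way toward the target gain, so gain changes never produce audible clicks. A minimal standalone sketch of that idea follows; the function name smoothGainRamp, the kSmoothingRate value, and the buffer arguments are illustrative only and are not part of the patch:

    #include <cstddef>

    // One-pole "de-zippering": per frame, step the running gain a small
    // fraction of the way toward the target, then apply it to the sample.
    static void smoothGainRamp(const float* source, float* destination,
                               size_t framesToProcess, double* lastGain, double targetGain)
    {
        const double kSmoothingRate = 0.005; // illustrative rate, analogous to DezipperRate
        double gain = *lastGain;
        while (framesToProcess--) {
            *destination++ = static_cast<float>(gain * *source++);
            gain += (targetGain - gain) * kSmoothingRate;
        }
        *lastGain = gain; // carry the smoothed gain across processing blocks
    }

Carrying the gain across blocks this way is exactly what *lastMixGain does in processWithGainFromMonoStereo(): the gain converges exponentially on the target instead of jumping to it.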
> Index: WebCore/platform/audio/AudioBus.h
> ===================================================================
> --- WebCore/platform/audio/AudioBus.h	(revision 0)
> +++ WebCore/platform/audio/AudioBus.h	(revision 0)
> @@ -0,0 +1,139 @@
> +/*
> + * Copyright (C) 2010 Google Inc. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * 1.  Redistributions of source code must retain the above copyright
> + *     notice, this list of conditions and the following disclaimer.
> + * 2.  Redistributions in binary form must reproduce the above copyright
> + *     notice, this list of conditions and the following disclaimer in the
> + *     documentation and/or other materials provided with the distribution.
> + * 3.  Neither the name of Apple Computer, Inc. ("Apple") nor the names of
> + *     its contributors may be used to endorse or promote products derived
> + *     from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY APPLE AND ITS CONTRIBUTORS "AS IS" AND ANY
> + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
> + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
> + * DISCLAIMED. IN NO EVENT SHALL APPLE OR ITS CONTRIBUTORS BE LIABLE FOR ANY
> + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
> + * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef AudioBus_h
> +#define AudioBus_h
> +
> +#include "AudioChannel.h"
> +#include <wtf/Noncopyable.h>
> +#include <wtf/PassOwnPtr.h>
> +#include <wtf/Vector.h>
> +
> +namespace WebCore {
> +
> +// An AudioBus represents a collection of one or more AudioChannels.
> +// The data layout is "planar" as opposed to "interleaved".
> +// An AudioBus with one channel is mono, an AudioBus with two channels is stereo, etc.
> +class AudioBus : public Noncopyable {
> +public:
> +    enum {
> +        ChannelLeft = 0,
> +        ChannelRight = 1,
> +        ChannelCenter = 2, // center and mono are the same
> +        ChannelMono = 2,
> +        ChannelLFE = 3,
> +        ChannelSurroundLeft = 4,
> +        ChannelSurroundRight = 5,
> +    };
> +
> +    enum {
> +        LayoutCanonical = 0
> +        // Can define non-standard layouts here
> +    };
> +
> +    // allocate indicates whether or not to initially create the AudioChannels with managed storage.
> +    // Normal usage is to pass true here, in which case the AudioChannels will memory-manage their own storage.
> +    // If allocate is false, then setChannelMemory() must be called later for each channel before the AudioBus is usable.
> +    AudioBus(unsigned numberOfChannels, size_t length, bool allocate = true);
> +
> +    // Tells the given channel to use an externally allocated buffer.
> +    void setChannelMemory(unsigned channelIndex, float* storage, size_t length);
> +
> +    // Channels
> +    unsigned numberOfChannels() const { return m_channels.size(); }
> +
> +    AudioChannel* channel(unsigned channel) { return m_channels[channel].get(); }
> +    const AudioChannel* channel(unsigned channel) const { return const_cast<AudioBus*>(this)->m_channels[channel].get(); }
> +    AudioChannel* channelByType(unsigned type);
> +
> +    // Number of sample-frames
> +    size_t length() const { return m_length; }
> +
> +    // Sample-rate: 0.0 if unknown or "don't care".
> +    double sampleRate() const { return m_sampleRate; }
> +    void setSampleRate(double sampleRate) { m_sampleRate = sampleRate; }
> +
> +    // Zeroes all channels.
> +    void zero();
> +
> +    // Returns true if the channel count and frame-size match.
> +    bool topologyMatches(const AudioBus& sourceBus) const;
> +
> +    // Creates a new buffer from a range in the source buffer.
> +    // 0 may be returned if the range does not fit in the sourceBuffer.
> +    static PassOwnPtr<AudioBus> createBufferFromRange(AudioBus* sourceBuffer, unsigned startFrame, unsigned endFrame);
> +
> +    // Scales all samples by the same amount.
> +    void scale(double scale);
> +
> +    // Master gain for this bus - used with sumWithGainFrom() below
> +    void setGain(double gain) { m_busGain = gain; }
> +    double gain() const { return m_busGain; }
> +
> +    void reset() { m_isFirstTime = true; } // for de-zippering
> +
> +    // Assuming sourceBus has the same topology, copies sample data from each channel of sourceBus to our corresponding channel.
> +    void copyFrom(const AudioBus& sourceBus);
> +
> +    // Sums the sourceBus into our bus with unity gain.
> +    // Our own internal gain m_busGain is ignored.
> +    void sumFrom(const AudioBus& sourceBus);
> +
> +    // Copy or sum each channel from sourceBus into our corresponding channel.
> +    // We scale by targetGain (and our own internal gain m_busGain), performing "de-zippering" to smoothly change from *lastMixGain to (targetGain * m_busGain).
> +    // The caller is responsible for setting up lastMixGain to point to storage that is unique for every "stream" that will be summed into this bus.
> +    // This storage represents the de-zippering memory.
> +    void copyWithGainFrom(const AudioBus& sourceBus, double* lastMixGain, double targetGain);
> +    void sumWithGainFrom(const AudioBus& sourceBus, double* lastMixGain, double targetGain);
> +
> +    // Returns maximum absolute value across all channels (useful for normalization).
> +    float maxAbsValue() const;
> +
> +    // Makes maximum absolute value == 1.0 (if possible).
> +    void normalize();
> +
> +protected:
> +    AudioBus() { }
> +
> +    void processWithGainFrom(const AudioBus& sourceBus, double* lastMixGain, double targetGain, bool sumToBus);
> +    void processWithGainFromMonoStereo(const AudioBus& sourceBus, double* lastMixGain, double targetGain, bool sumToBus);
> +
> +    size_t m_length;
> +
> +    Vector<OwnPtr<AudioChannel> > m_channels;
> +
> +    int m_layout;
> +
> +    double m_busGain;
> +    bool m_isFirstTime;
> +    double m_sampleRate; // 0.0 if unknown or N/A
> +};
> +
> +} // WebCore
> +
> +#endif // AudioBus_h
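
A rough usage sketch for the AudioBus interface declared above: creating a bus that owns its storage, wrapping externally allocated buffers with setChannelMemory(), and mixing one bus into another with de-zippered gain. The function name, block size, buffers, and gain values here are hypothetical and only show the calling pattern:

    #include "AudioBus.h"

    using WebCore::AudioBus;

    void mixExample()
    {
        const size_t kFramesPerBlock = 128;

        // A stereo bus that allocates and manages its own sample storage.
        AudioBus mixBus(2, kFramesPerBlock);
        mixBus.zero();

        // A stereo bus wrapping externally allocated buffers (allocate == false).
        float leftStorage[128];
        float rightStorage[128];
        AudioBus wrapperBus(2, kFramesPerBlock, false);
        wrapperBus.setChannelMemory(AudioBus::ChannelLeft, leftStorage, kFramesPerBlock);
        wrapperBus.setChannelMemory(AudioBus::ChannelRight, rightStorage, kFramesPerBlock);

        // One double of de-zippering memory per source stream summed into mixBus.
        double lastMixGain = 0;
        mixBus.sumWithGainFrom(wrapperBus, &lastMixGain, 0.8);
    }

On the next render quantum the same lastMixGain storage would be passed in again, so the mix gain keeps ramping smoothly toward its target rather than restarting.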
> Index: WebCore/platform/audio/AudioSourceProvider.h
> ===================================================================
> --- WebCore/platform/audio/AudioSourceProvider.h	(revision 0)
> +++ WebCore/platform/audio/AudioSourceProvider.h	(revision 0)
> @@ -0,0 +1,46 @@
> +/*
> + * Copyright (C) 2010 Google Inc. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * 1.  Redistributions of source code must retain the above copyright
> + *     notice, this list of conditions and the following disclaimer.
> + * 2.  Redistributions in binary form must reproduce the above copyright
> + *     notice, this list of conditions and the following disclaimer in the
> + *     documentation and/or other materials provided with the distribution.
> + * 3.  Neither the name of Apple Computer, Inc. ("Apple") nor the names of
> + *     its contributors may be used to endorse or promote products derived
> + *     from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY APPLE AND ITS CONTRIBUTORS "AS IS" AND ANY
> + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
> + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
> + * DISCLAIMED. IN NO EVENT SHALL APPLE OR ITS CONTRIBUTORS BE LIABLE FOR ANY
> + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
> + * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef AudioSourceProvider_h
> +#define AudioSourceProvider_h
> +
> +namespace WebCore {
> +
> +class AudioBus;
> +
> +// Abstract base class for a pull-model client.
> +// provideInput() gets called repeatedly to render time-slices of a continuous audio stream.
> +class AudioSourceProvider {
> +public:
> +    virtual void provideInput(AudioBus* bus, size_t framesToProcess) = 0;
> +    virtual ~AudioSourceProvider() { }
> +};
> +
> +} // WebCore
> +
> +#endif // AudioSourceProvider_h
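
To illustrate the pull model that AudioSourceProvider defines, here is a hypothetical provider that fills each requested block with a sine wave. The SineSource class and its members are invented for this sketch, and it assumes AudioChannel exposes a mutable data() pointer to its samples, as in the earlier part of this patch:

    #include "AudioBus.h"
    #include "AudioSourceProvider.h"
    #include <cmath>

    class SineSource : public WebCore::AudioSourceProvider {
    public:
        SineSource(double frequency, double sampleRate)
            : m_frequency(frequency)
            , m_sampleRate(sampleRate)
            , m_phase(0)
        {
        }

        // Called repeatedly by the rendering code; each call produces one block.
        virtual void provideInput(WebCore::AudioBus* bus, size_t framesToProcess)
        {
            const double twoPi = 6.283185307179586;
            const double phaseIncrement = twoPi * m_frequency / m_sampleRate;
            for (size_t i = 0; i < framesToProcess; ++i) {
                float sample = static_cast<float>(std::sin(m_phase));
                m_phase += phaseIncrement;
                // Write the same sample into every channel of the bus.
                for (unsigned c = 0; c < bus->numberOfChannels(); ++c)
                    bus->channel(c)->data()[i] = sample; // assumes AudioChannel::data() returns float*
            }
        }

    private:
        double m_frequency;
        double m_sampleRate;
        double m_phase;
    };

A destination would then call provideInput() once per render quantum and mix the resulting bus into its own with copyWithGainFrom() or sumWithGainFrom().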
Comment 19 Chris Rogers 2010-09-01 12:07:13 PDT
Created attachment 66242 [details]
Patch
Comment 20 Chris Rogers 2010-09-01 12:08:13 PDT
Hi Ken, I addressed your comments.  You previously approved this - thanks.
Comment 21 Kenneth Russell 2010-09-02 16:06:21 PDT
Comment on attachment 66242 [details]
Patch

Looks good; r=me again.
Comment 22 WebKit Commit Bot 2010-09-02 18:01:17 PDT
Comment on attachment 66242 [details]
Patch

Rejecting patch 66242 from commit-queue.

crogers@google.com does not have committer permissions according to http://trac.webkit.org/browser/trunk/WebKitTools/Scripts/webkitpy/common/config/committers.py.

- If you do not have committer rights please read http://webkit.org/coding/contributing.html for instructions on how to use bugzilla flags.

- If you have committer rights please correct the error in WebKitTools/Scripts/webkitpy/common/config/committers.py by adding yourself to the file (no review needed).  The commit-queue restarts itself every 2 hours.  After restart the commit-queue will correctly respect your committer rights.
Comment 23 Chris Rogers 2010-09-03 12:10:24 PDT
Comment on attachment 66242 [details]
Patch

Clearing flags on attachment: 66242

Committed r66755: <http://trac.webkit.org/changeset/66755>
Comment 24 Chris Rogers 2010-09-03 12:10:30 PDT
All reviewed patches have been landed.  Closing bug.