Fundamentals of Audio and Video Programming for Games (Pro-Developer)
The classes and program design described previously are tailored toward editing the parameters of special effects through a UI and demonstrating each effect individually.
DirectSound does not limit the number of special effects that can be applied to a sound buffer. For example, it is typical for a professional audio-editing system to have three or even five parametric equalizers. There are also situations in which you might want multiple chorus, echo, or compressor effects. Any number of these effects can be applied to a DirectSound buffer by filling out an array of DSEFFECTDESC structures (one per effect), calling SetFX, and then retrieving each effect's interface with GetObjectInPath.
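The following sketch shows the general pattern, with error handling trimmed. It assumes pBuffer is an IDirectSoundBuffer8 pointer for a secondary buffer that was created with the DSBCAPS_CTRLFX flag and is currently stopped; the parameter values are illustrative only.

    // Link with dsound.lib and dxguid.lib for the effect class GUIDs.
    #include <dsound.h>

    HRESULT ApplyEffectChain(IDirectSoundBuffer8 *pBuffer)
    {
        // Describe three effects: two parametric EQ bands and one echo.
        DSEFFECTDESC fx[3] = {};
        for (int i = 0; i < 3; ++i)
            fx[i].dwSize = sizeof(DSEFFECTDESC);

        fx[0].guidDSFXClass = GUID_DSFX_STANDARD_PARAMEQ;
        fx[1].guidDSFXClass = GUID_DSFX_STANDARD_PARAMEQ;
        fx[2].guidDSFXClass = GUID_DSFX_STANDARD_ECHO;

        DWORD results[3] = {};                 // receives a DSFXR_* code per effect
        HRESULT hr = pBuffer->SetFX(3, fx, results);
        if (FAILED(hr))
            return hr;

        // Retrieve the second parametric EQ (index 1 of that class) and set its band.
        IDirectSoundFXParamEq8 *pEq = NULL;
        hr = pBuffer->GetObjectInPath(GUID_DSFX_STANDARD_PARAMEQ, 1,
                                      IID_IDirectSoundFXParamEq8,
                                      (LPVOID*)&pEq);
        if (SUCCEEDED(hr))
        {
            // Center frequency in Hz, bandwidth in semitones, gain in dB.
            DSFXParamEq params = { 4000.0f, 12.0f, -6.0f };
            hr = pEq->SetAllParameters(&params);
            pEq->Release();
        }
        return hr;
    }

Remember that SetFX replaces the entire effect chain in one call, so describe every effect you want in the array; the buffer must not be playing when the call is made.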
The same is true for environmental effects: more than one environmental reverb effect can be applied to one sound buffer, although clearly there are diminishing returns in applying more and more effects to a single sound.
We are not crazy about the implementation of I3DL2 in DirectSound. Because only the wet (reverberated) signal is output, you might need to add a second buffer playing the dry sound. Apart from the performance cost of running two buffers, the DirectSound SDK provides no means of synchronizing them, so you cannot be sure that small timing offsets between the wet and dry signals will go unnoticed (not to mention unwanted phase effects). In fact, you cannot even be sure that the two buffers will have the same latency each time the application is run. See Chapter 7 for the recommended technique for applying environmental effects.
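For reference, here is a rough sketch of the dry-plus-wet workaround just described. It assumes the two buffers have already been created from the same wave data with DSBCAPS_CTRLFX and are stopped; as noted above, DirectSound gives you no way to start them sample-accurately, which is exactly the weakness of this approach.

    #include <dsound.h>

    HRESULT PlayDryAndWet(IDirectSoundBuffer8 *pDry, IDirectSoundBuffer8 *pWet)
    {
        // Apply I3DL2 environmental reverb to the wet buffer only.
        DSEFFECTDESC fx = {};
        fx.dwSize = sizeof(DSEFFECTDESC);
        fx.guidDSFXClass = GUID_DSFX_STANDARD_I3DL2REVERB;

        DWORD result = 0;
        HRESULT hr = pWet->SetFX(1, &fx, &result);
        if (FAILED(hr))
            return hr;

        // Start both buffers back to back. There is no guarantee that they begin
        // on the same sample, or with the same latency from one run to the next.
        hr = pDry->Play(0, 0, DSBPLAY_LOOPING);
        if (SUCCEEDED(hr))
            hr = pWet->Play(0, 0, DSBPLAY_LOOPING);
        return hr;
    }
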