In Part 1 of this pair of articles, we ran SPICE noise simulations on a simple second-order lowpass filter. We saw that there is something fundamental about the 'hold' that the filter's capacitor network has over the total output noise level. Scaling all the filter's resistors by a constant factor, to change the cutoff frequency of the filter without changing any of the capacitor values, leaves the total noise voltage unchanged. With practical amplifiers, the noise level is degraded from the ideal case, but it's still pretty straightforward to predict what it will be.
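That resistor-independence result is easy to check numerically without firing up SPICE. The sketch below (a minimal first-order RC illustration, not the actual Part 1 circuit; component values are assumed for the example) integrates the resistor's thermal noise density through the filter's response for several values of R and compares the total against the well-known sqrt(kT/C) figure:

```python
import numpy as np

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # assumed temperature, K
C = 10e-9          # fixed capacitor, 10 nF (assumed value)

f = np.logspace(0, 9, 200_000)   # 1 Hz to 1 GHz

for R in (1e3, 10e3, 100e3):
    # Resistor thermal noise PSD (4kTR), shaped by the RC lowpass response
    psd = 4 * k * T * R / (1 + (2 * np.pi * f * R * C) ** 2)
    # Trapezoidal integration of the PSD over frequency
    v_total = np.sqrt(np.sum((psd[1:] + psd[:-1]) / 2 * np.diff(f)))
    print(f"R = {R:8.0f} ohm -> total noise = {v_total * 1e6:.3f} uV rms")

# Theory: total noise is sqrt(kT/C), with R nowhere in sight
print(f"sqrt(kT/C) = {np.sqrt(k * T / C) * 1e6:.3f} uV rms")
```

Changing R moves the cutoff frequency, but the lower noise density of a smaller resistor is exactly offset by the wider noise bandwidth, so every row of the printout lands on the same total.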
Can we make useful noise level predictions if our filters are implemented digitally? In modern electronic product design, one can often make a choice between analog and digital signal processing. With analog processing, the filtering and other signal manipulation is done before converting the signal to digital (if it's indeed ever converted). The digital model involves converting as early as possible in the signal chain, doing the processing in the digital domain, and then perhaps converting back to analog.
The device I spend most time solving people's problems with, Cypress's PSoC3, has op amps for constructing analog active filters, and also a fast digital filter engine that can implement a wide range of filters in the digital domain. To help make the choice, systems engineers need a reliable method for directly comparing the noise performance of analog and digital filtering approaches.
We're taught that “going digital” creates quantization noise, which is the per-sample error involved in fitting a value of arbitrarily high precision into a lower-resolution number system, usually an N-bit binary system with 2^N available states. For most real-world signals, this error is effectively uncorrelated with the actual signal and can therefore be treated as random noise, whose value is uniformly distributed between -0.5 LSB and +0.5 LSB. Textbooks demonstrate both that this noise is 'white', i.e. has a frequency-independent spectral density, and that its total RMS value is q/sqrt(12), about 0.29 LSB, where q is the size of one quantization step.
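You can watch that q/sqrt(12) figure emerge from a quick experiment. The sketch below (assumed parameters throughout: a 12-bit mid-tread quantizer over a ±1 full-scale range, and a sine-plus-noise test signal busy enough to decorrelate the error) quantizes the signal and measures the RMS of the per-sample error:

```python
import numpy as np

rng = np.random.default_rng(0)
N_BITS = 12
q = 2.0 / 2**N_BITS        # LSB size for an assumed -1..+1 full-scale range

# A "real-world-like" test signal: sine plus a little noise, well above the
# quantization floor, so the error behaves like the textbook uniform noise
n = 200_000
x = 0.9 * np.sin(2 * np.pi * 0.01234 * np.arange(n))
x += 0.05 * rng.standard_normal(n)
x = np.clip(x, -1.0, 1.0)

xq = np.round(x / q) * q   # uniform mid-tread quantizer
err = xq - x               # per-sample quantization error, bounded by +/- q/2

print(f"measured error RMS: {err.std():.3e}")
print(f"q/sqrt(12):         {q / np.sqrt(12):.3e}")
```

The two printed numbers agree to within a few percent, which is the uniform-distribution result in action: a uniform error spanning one LSB has an RMS of q/sqrt(12).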