Sampling at anything significantly higher than 40 kHz is termed oversampling. In just a few years' time, we have seen the audio industry move from the CD system standard of 44.1 kHz to sampling frequencies many times higher. With sampling frequencies this high, aliasing is no longer an issue. So audio signals can be digitized into digital words without loss of information, and with no aliasing effects, as long as the sampling frequency is high enough.
How is this done?
Quantizing is the process of determining which of the possible values -- set by the number of bits, i.e., by the number of parts the voltage reference is divided into -- is closest to the current sample. Quantizing, by definition then, involves deciding between two values and thus always introduces error. How big the error, or how accurate the answer, depends on the number of bits: the more bits, the better the answer. The converter has a reference voltage which is divided up into 2^n parts, where n is the number of bits.
Each part represents the same voltage increment. Since you cannot resolve anything smaller than this increment, there is error; there is always error in the conversion process. This is the accuracy issue (Figure 3). Since the signal swings positive and negative, there are 2^(n-1) levels for each direction. Hence, an 8-bit system with a 5 volt reference cannot resolve any change smaller than 39 mV. This means a worst-case accuracy error of half a step. Table 1 compares the accuracy improvement gained by higher-bit systems along with the reduction in error.
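A minimal sketch of the step-size arithmetic (the function name is mine; it assumes the 5 volt reference used in the later examples):

```python
# Quantizing step: the reference voltage divided into 2^n equal parts.
# With a bipolar signal, half the levels serve each polarity, so an
# 8-bit converter with a 5 V reference resolves 5 / 2^7 ~ 39 mV steps.
def quantizing_step(v_ref, n_bits, bipolar=True):
    levels = 2 ** (n_bits - 1) if bipolar else 2 ** n_bits
    return v_ref / levels

step = quantizing_step(5.0, 8)   # ~0.039 V, i.e. 39 mV
```

Doubling the bits halves the step, which is where the accuracy improvement in Table 1 comes from.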
Note: this is not the only way to use the reference voltage. Many schemes exist for coding, but this one nicely illustrates the principles involved. Each step resulting from dividing the reference into the number of equal parts dictated by the number of bits is the same size and is called a quantizing step (also called a quantizing interval -- see Fig. 4).
Originally this step was termed the LSB (least significant bit), since it equals the value of the smallest coded bit; however, it is an illogical choice for mathematical treatments and has since been replaced by the more accurate term quantizing step. Figure 4. Quantization -- 3-Bit, 5V Example
The error due to the quantizing process is called quantizing error (no definitional stretch here). As shown earlier, each time a sample is taken there is error. Here's the not-so-obvious part: the quantizing error can be thought of as an unwanted signal which the quantizing process adds to the perfect original. An example best illustrates this principle. Let the sampled input value be some arbitrarily chosen value, say, 2 volts. And let this be a 3-bit system with a 5 volt reference.
For the 2 volt input example, the converter must choose between either 1.875 volts or 2.5 volts, the two nearest quantizing levels. Choosing 1.875 volts results in a quantizing error of 0.125 volts. If the input signal had fallen closer to 2.5 volts, the converter would have chosen 2.5 volts instead, with an error of the opposite sign. These alternating unwanted signals added by quantizing form a quantizing error waveform, a kind of additive broadband noise that is generally uncorrelated with the signal and is called quantizing noise. Since the quantizing error is essentially random, i.e., uncorrelated with the signal, it behaves much like noise. This is not quite the same thing as thermal noise, but it is similar.
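The 2 volt example can be checked in a few lines (a sketch; it rounds to the nearest level, and the function name is mine):

```python
# Quantize a sample to the nearest of the 2^3 = 8 levels of a 3-bit,
# 5 V converter, and report the quantizing error.
def quantize(v_in, v_ref=5.0, n_bits=3):
    step = v_ref / 2 ** n_bits              # 0.625 V per level
    code = min(round(v_in / step), 2 ** n_bits - 1)
    return code * step

level = quantize(2.0)                       # chooses 1.875 V
error = 2.0 - level                         # quantizing error of 0.125 V
```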
The energy of this added noise is equally spread over the band from dc to one-half the sampling rate. This is a most important point and will be returned to when we discuss delta-sigma converters and their use of extreme oversampling. Successive approximation is one of the earliest and most successful analog-to-digital conversion techniques.
Successive approximation paved the way for the delta-sigma techniques to follow. A comparator is an electronic block whose output is determined by comparing the values of its two inputs. If the positive input is larger than the negative input then the output swings positive, and if the negative input exceeds the positive input, the output swings negative.
Therefore if a reference voltage is connected to one input and an unknown input signal is applied to the other input, you now have a device that can compare and tell you which is larger. Thus a comparator gives you a "high output" which could be defined to be a "1" when the input signal exceeds the reference, or a "low output" which could be defined to be a "0" when it does not.
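In code, the comparator's decision is a single line (a sketch with hypothetical names):

```python
# 1-bit decision: "1" when the signal exceeds the reference, else "0".
def comparator(signal, reference):
    return 1 if signal > reference else 0
```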
Figure 5A. Successive Approximation Example. Figure 5B. The circuit evaluates each sample and creates a digital word representing the closest binary value. The process takes the same number of steps as there are bits available, i.e., an n-bit conversion requires n comparisons. The analog sample is successively compared to determine the digital code, beginning with the determination of the biggest (most significant) bit of the code. The description given in Daniel Sheingold's Analog-Digital Conversion Handbook (see References) offers the best analogy as to how successive approximation works.
The process is exactly analogous to a gold miner's assay scale, or a chemical balance, as seen in Figure 5A. You compare the unknown sample against a set of binary-weighted known values by first placing the heaviest weight on the scale. If it tips the scale you remove it; if it does not, you leave it and go to the next smaller value. If that value tips the scale you remove it; if it does not, you leave it and go to the next lower value, and so on through the smallest weight. The weights remaining on the scale are your best answer.
The sum of all the weights on the scale represents the closest value you can resolve. In digital terms, we can analyze this example by saying that a "0" was assigned to each weight removed, and a "1" to each weight remaining -- in essence creating a digital word equivalent to the unknown sample, with the number of bits equaling the number of weights.
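The weighing procedure maps directly to code. This sketch (names are mine) trials each binary-weighted "weight" from heaviest to lightest, keeping it only when it does not tip the scale:

```python
def sar_convert(v_in, v_ref, n_bits):
    step = v_ref / 2 ** n_bits
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)          # place the next weight
        if trial * step <= v_in:           # it does not tip the scale:
            code = trial                   # leave it on (a "1")
        # else: remove it (a "0")
    return code

code = sar_convert(2.0, 5.0, 3)            # 3 comparisons for 3 bits
```

For the 2 volt, 3-bit, 5 volt example this settles on code 3, i.e. 1.875 volts, in exactly three comparisons.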
As stated earlier, the successive approximation technique must repeat this cycle for each sample. Even with today's technology this is a time-consuming process, and it is still limited to relatively slow sampling rates, but it did launch the digital audio age. The successive approximation method of data conversion is an example of pulse code modulation, or PCM.
Three elements are required: sampling, quantizing, and encoding into a fixed-length digital word. The reverse process reconstructs the analog signal from the PCM code. The output of a PCM system is a series of digital words, where the word size is determined by the available bits: a series of 8-bit words, or longer words, etc. Look at Fig. 6.
In a typical PWM system, the analog input signal is applied to a comparator whose reference voltage is a triangle-shaped waveform whose repetition rate is the sampling frequency. This simple block forms what is called an analog modulator. Figure 6. Pulse Width Modulation (PWM) A simple way to understand the "modulation" process is to view the output with the input held steady at zero volts. As long as there is no input, the output is a steady square wave. As soon as the input is non-zero, the output becomes a pulse-width modulated waveform.
That is, when the non-zero input is compared against the triangular reference voltage, it varies the length of time the output is either high or low. For example, say there was a steady DC value applied to the input.
For all samples when the value of the triangle is less than the input value, the output stays low, and for all samples when it is greater than the input value, it changes state and remains high. Therefore, if the triangle starts higher than the input value, the output goes high; at the next sample period the triangle has increased in value but is still more than the input, so the output remains high; this continues until the triangle reaches its apex and starts down again; eventually the triangle voltage drops below the input value and the output drops low and stays there until the reference exceeds the input again.
The resulting pulse-width modulated output, when averaged over time, gives the exact input voltage. This is also the core principle of most Class-D switching power amplifiers. The analog input is converted into a variable pulse-width stream used to turn on the output switching transistors. The analog output voltage is simply the average of the on-times of the positive and negative outputs.
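A numeric sketch of this averaging property (names are mine; I use the convention that the output goes high when the input exceeds the triangle -- the reverse convention simply inverts the result):

```python
def triangle(t):                        # one period for t in [0, 1)
    return 4 * abs(t - 0.5) - 1         # ramps +1 -> -1 -> +1

def pwm_average(v_in, n=100_000):
    # Compare the input against the triangle; average the +/-1 output.
    high = sum(1 for i in range(n) if v_in > triangle((i + 0.5) / n))
    return (2 * high - n) / n

avg = pwm_average(0.25)                 # the time-average recovers ~0.25
```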
Pretty amazing stuff from a simple comparator with a triangle waveform reference. Another way to look at this is that this simple device actually codes a single bit of information, i.e., whether the input is above or below the reference. This one-bit principle is the heart of delta-sigma conversion, an idea that waited patiently for the semiconductor industry to develop the technologies necessary to integrate analog and digital circuitry on the same chip. Today's very high-speed "mixed-signal" IC processing allows the total integration of all the circuit elements necessary to create delta-sigma data converters of awesome magnitude [5]. How the name came about is interesting.
Another way to look at the action of the comparator is that its 1-bit output tells the output voltage which direction to go based upon what the input signal is doing. It looks at the input and compares it against its last look (sample) to see if this new sample is bigger or smaller than the last one -- that is the information transfer: bigger or smaller, increasing or decreasing.
If it is bigger, it tells the output to keep increasing; if it is smaller, it tells the output to stop increasing and start decreasing. It merely reacts to the change. It is the oversampling rate and subsequent digital processing that separates this from plain delta modulation (no sigma). Doing a bit (sorry) of math shows that the signal-to-quantizing-noise ratio for a maximum full-scale input equals 6.02n + 1.76 dB, where n is the number of bits.
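A quick check of the quantizing-noise math: comparing a full-scale sine (power (FS/2)^2 / 2) against error spread uniformly over one step (power step^2 / 12) gives 6.02n + 1.76 dB for n bits. A sketch, with a name of my own choosing:

```python
import math

# SNR of an ideal n-bit quantizer: full-scale sine power divided by
# the step^2 / 12 power of error uniform across one quantizing step
# reduces to 1.5 * 2^(2n), i.e. 6.02n + 1.76 dB.
def ideal_snr_db(n_bits):
    return 10 * math.log10(1.5 * 2 ** (2 * n_bits))

snr_16 = ideal_snr_db(16)     # ~98.1 dB for a 16-bit converter
```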
That is, since the converter must choose between the only two possibilities of maximum or minimum values, the error can be as much as half of that range. One attribute shines true above all others for delta-sigma converters and makes them a superior audio converter: simplicity. The simplicity of 1-bit technology makes the conversion process very fast, and very fast conversions allow the use of extreme oversampling.
And extreme oversampling pushes the quantizing noise and aliasing artifacts way out to megawiggle-land, where they are easily dealt with by digital filters (typically 64-times oversampling is used, resulting in a sampling frequency on the order of 3 MHz). To get a better understanding of how oversampling reduces audible quantization noise, we need to think in terms of noise power. From physics you may remember that power is conserved -- i.e., the total quantization noise power stays the same; only its distribution changes. With oversampling, the quantization noise power is spread over a band that is as many times wider as the rate of oversampling. Noise shaping helps reduce in-band noise even more.
Oversampling pushes out the noise, but it does so uniformly, that is, the spectrum is still flat. Noise shaping changes that. Using very clever complex algorithms and circuit tricks, noise shaping contours the noise so that it is reduced in the audible regions and increased in the inaudible regions. Conservation still holds, the total noise is the same, but the amount of noise present in the audio band is decreased while simultaneously increasing the noise out-of-band -- then the digital filter eliminates it.
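Under the usual white-noise assumption, the gain from spreading alone is easy to tabulate (a sketch; the function name is mine):

```python
import math

# Total quantizing noise power is conserved; oversampling by a ratio
# OSR leaves only 1/OSR of it in the audio band -- 3 dB per doubling.
def oversampling_gain_db(osr):
    return 10 * math.log10(osr)

gain_64x = oversampling_gain_db(64)    # ~18 dB from spreading alone
```

Noise shaping then adds to this by moving still more of the fixed noise budget out of band.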
Very slick. As shown in Fig. 7, the analog modulator is the 1-bit converter discussed previously, with the change that the analog signal is integrated before the delta modulation is performed. The integral of the analog signal is encoded, rather than the change in the analog signal, as is the case for traditional delta modulation.
Oversampling and noise shaping push and contour all the bad stuff (aliasing, quantizing noise, etc.) out of band, where the digital filter removes it. The decimation circuit, or decimator, is the digital circuitry that generates the correct output word length and restores the desired output sample frequency. It is a digital sample-rate reduction filter, and the process is sometimes termed downsampling (as opposed to oversampling), since it is here that the sample rate is returned from its oversampled rate to the normal CD rate of 44.1 kHz. Figure 8.
Just what is dither? Aside from being a funny-sounding word, it is a wonderfully accurate choice for what is being done. The word "dither" comes from a 12th-century English term meaning "to tremble." Which, if you think about it, is not a bad description of noise. Dither is one of life's many trade-offs. Here the trade-off is between noise and resolution. Believe it or not, we can introduce dither (a form of noise) and increase our ability to resolve very small values.
Values, in fact, smaller than our smallest bit. Perhaps you can begin to grasp the concept by making an analogy between dither and anti-lock brakes. Here's how this analogy works: with regular brakes, if you just stomp on them, you probably create an unsafe skid situation for the car. Instead, if you rapidly tap the brakes, you control the stopping without skidding. We shall call this "dithering the brakes."
So by "tapping" on our analog signal, we can improve our ability to resolve it. By introducing noise, the converter rapidly switches between two quantization levels, rather than picking one or the other when neither is really correct. Sonically, this comes out as noise rather than a discrete level with error. Subjectively, what would have been perceived as distortion is now heard as noise. Let's look at this in more detail. The problem dither helps to solve is that of quantization error caused by the data converter being forced to choose one of two exact levels for each bit it resolves.
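A simulation makes the trade concrete. Here a signal of 0.3 step is invisible to a bare quantizer, but with +/-0.5 step of added dither the long-run average of the quantized output recovers it (a sketch using rectangular dither for simplicity; audio practice usually prefers triangular dither):

```python
import random

random.seed(1)
signal = 0.3                      # 0.3 of one quantizing step
plain = round(signal)             # always 0 -- the signal vanishes

# With dither, "1" codes appear ~30% of the time, so the average of
# many quantized samples tracks the sub-step value 0.3.
n = 100_000
dithered = sum(round(signal + random.uniform(-0.5, 0.5))
               for _ in range(n)) / n
```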
It cannot choose between levels; it must pick one or the other. With any fixed number of bits, the digitized waveform for high-frequency, low-level signals looks very much like a steep staircase with few steps. The pipeline architecture is used as a standard in data conversion applications at moderate-to-high resolutions and for sample rates of 5 MHz and above. The architecture reduces the number of comparators needed by deploying multiple low-resolution flash conversion stages cascaded together to form the pipe.
The residual signal is then amplified before moving on to the following stage for finer quantization. In pipeline conversion, a sample-and-hold amplifier (SHA) is needed to acquire the input signal and hold it to better than the overall accuracy target of the converter.
Once all sub-stages have a valid conversion result, a digital correction block constructs the final multi-bit result. However, at higher resolutions, as the sampled signal moves through the pipeline, transferring the charge associated with a given signal demands high gain bandwidth to ensure stage settling times fall within the limits set by the high-frequency signals being sampled.
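The stage-by-stage resolve/subtract/amplify flow can be sketched as an idealized model (names are mine; real pipelines add redundant bits for the digital correction just described):

```python
# Each stage resolves a few bits, subtracts their analog value, and
# amplifies the residue by 2^bits for the next, finer stage.
def pipeline_convert(v_in, v_ref, stage_bits, n_stages):
    code, residue = 0, v_in
    for _ in range(n_stages):
        step = v_ref / 2 ** stage_bits
        d = min(int(residue / step), 2 ** stage_bits - 1)
        code = (code << stage_bits) | d
        residue = (residue - d * step) * 2 ** stage_bits
    return code

# Two 2-bit stages agree with a direct 4-bit (truncating) conversion:
code = pipeline_convert(2.0, 5.0, 2, 2)    # 2.0 V of a 5 V span -> code 6
```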
To maintain linearity you need to calibrate and correct for the limits in component matching achievable with current process technologies, and it is tough to migrate designs from one process to another. As operating voltages fall from one process generation to the next, the input signal headroom is compressed. Furthermore, designing switches with greatly reduced threshold voltages that work well in deep sub-micron processes gets harder. In anti-alias filter (AAF) design, steep attenuation characteristics are hard to achieve, tempting you to consider over-sampling the signal of interest.
Over-sampling stretches the Nyquist zone, lowering demands on filter roll-off, but the trade-offs are increased system power and higher processing speeds demanded of the back-end DSP system. The delta-sigma modulator comprises a summing node, integrator, and comparator. The modulator compares the input signal against a voltage reference level fed back from the DAC. The comparator is clocked at the over-sampling frequency. The DAC switches between ±Vref to close the control loop. Quantization errors within the modulator limit dynamic range.
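That loop -- summing node, integrator, clocked comparator, 1-bit DAC -- is small enough to simulate directly (a sketch with hypothetical names):

```python
# First-order delta-sigma modulator: integrate (input - fed-back DAC
# level), then a clocked comparator produces the 1-bit output stream.
def ds_modulate(samples, vref=1.0):
    integrator, bits = 0.0, []
    fb = -vref                               # last 1-bit DAC level
    for x in samples:
        integrator += x - fb                 # summing node + integrator
        bit = 1 if integrator > 0 else 0     # clocked comparator
        fb = vref if bit else -vref          # DAC switches between +/-Vref
        bits.append(bit)
    return bits

bits = ds_modulate([0.5] * 1000)
ones_density = sum(bits) / len(bits)         # ~0.75 for a +0.5 V input
```

The density of 1s in the stream tracks the input level, which is exactly the information the decimation filter later averages back into multi-bit words.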
A closed loop modulator works as a high-pass filter to quantization noise and as a low-pass filter to the input signal. The effect of this is a further increase in dynamic range of 9 dB for each doubling of the sample rate. Additional integrators within the loop can increase the steepness of the noise characteristic to give you further dynamic range increases.
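The benefit of adding integrators follows a simple rule: an L-th order loop improves in-band dynamic range by (2L + 1) x 3 dB per doubling of the over-sampling ratio, of which the 9 dB quoted above is the first-order case. A sketch (the function name is mine):

```python
import math

# Dynamic-range gain per doubling of OSR for an L-th order modulator:
# (2L + 1) * 10 * log10(2), i.e. ~9 dB for L = 1, ~15 dB for L = 2.
def dr_gain_per_doubling_db(order):
    return (2 * order + 1) * 10 * math.log10(2)
```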
This 65k-point FFT plot of the modulator output illustrates the noise power density per FFT bin relative to the input signal frequency. The simulation was driven with a single low-frequency input tone. Note the rising characteristic of the out-of-band noise, i.e., the shaped quantization noise. Having established a modulator system capable of achieving these low noise levels, the next stage is to apply filtering to eliminate out-of-band noise, and decimation to re-sample the data.
A digital filter must reject all signal components within the serial data stream that occur beyond the Nyquist bandwidth. Simplistically, two frequency-selective filter structures can be implemented in the digital domain: FIR and IIR. FIRs are more widely used because they are simpler and have a linear phase response. IIR filter design is more complicated by virtue of the feedback included.
The potentially infinite response of the IIR filter means there is always a possibility for the filter to become unstable. In addition, group delay can become significant and have adverse effects on performance in some systems. Digital filtering allows for the data reduction or down-sampling necessary to provide output data at the originally intended sample rate.
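A toy version of the filter-then-down-sample step, using a boxcar FIR as the low-pass (a sketch; production designs use proper multi-stage FIR decimators):

```python
# Low-pass the oversampled stream with an M-tap moving average (an FIR
# with linear phase), then keep every M-th output sample.
def decimate(samples, m):
    filtered = [sum(samples[i - m + 1:i + 1]) / m
                for i in range(m - 1, len(samples))]
    return filtered[::m]

stream = [0, 1] * 8                  # a busy 1-bit stream averaging 0.5
out = decimate(stream, 4)            # 4x fewer samples, each near 0.5
```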
Note that over-sampling produces large amounts of redundant data; the process of sample rate reduction that removes it is called decimation. To summarize: firstly, over-sampling spreads the quantization noise; secondly, noise shaping reduces the in-band noise at the expense of higher out-of-band noise;
thirdly, digital filtering attenuates the out-of-band noise and signal components. The advantages of discrete-time (DT) schemes are their relatively simple architecture, the way that increased sample rates produce dynamic-range improvements, and their compatibility with VLSI CMOS processes. Moving to continuous-time (CT) loop filters opens up new application possibilities, from wide base-band sampling out to several tens of megahertz to under-sampling RF signals in band-pass designs. In discrete time, sampling an input signal requires that the signal be acquired at a precise moment in time.
For an accurate representation of the input signal to be acquired on a hold capacitor, the input stages must settle to a final level, within the accuracy limits of the system, in a time period driven by the system sample rate. This settling time eats into the sample period of the system. At higher resolutions this drives a need for very high gain-bandwidth circuits within the acquisition signal path. In fact, the converter system must be designed with circuits that work with bandwidths many times that of the input signal. Discrete-time circuits therefore have to burn excess power to process a given bandwidth.
Continuous-time designs do not require the high gain-bandwidth stages necessary to force rapid settling, so power in these stages is reduced. In addition, there is no down-mixing of noise, eliminating additional spectral spurs in the base-band.