1. Quantization Basics

Continuous-valued signals can take any real value, either in the entire range of real numbers or in a range limited by some system constraints. In either case, an uncountably infinite set of values is required to represent the signal values. An example is the real-valued voltage at the output of an analog microphone and at the input of an analog-to-digital converter (ADC).

If a signal has to be processed or stored digitally, each of its values must be representable by a finite number of bits. Thus, all values together have to form a finite, countable set. A signal consisting only of such discrete values is said to be quantized, and the possible signal values are called quantization levels. The process of transforming a continuous-valued signal into a discrete-valued one is called quantization and is visualized and analyzed in this webdemo.

To be consistent with the webdemo implementation, discrete-time (sampled) signals are used to describe the quantization process, without loss of generality. For details on the transformation of continuous-time signals into discrete-time ones, please refer to the webdemo Sampling of analog signals.

Another case in which a (re-)quantization is required is when a signal that needs to be represented digitally is originally described by a mathematical expression, for example $g(t) = \sin(t)/t$, or, going further, by a superposition of several such functions, as is the case for pulse-shaped digital data transmission. Theoretically, such functions are real-valued; when designing a signal in software, this continuous range of values is approximated by a floating-point representation, and quantization is required to bring the signal into a discrete-valued fixed-point or integer form that can be played out over a digital-to-analog converter (DAC).
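As a minimal sketch of this last step (the parameter values, variable names, and the 16-bit word length are our own assumptions, not taken from the webdemo), the following Python snippet quantizes the floating-point pulse $g(t)=\sin(t)/t$ to 16-bit signed integers, i.e. the kind of integer form a DAC typically expects:

```python
import numpy as np

# Assumed example: quantize a floating-point sinc pulse g(t) = sin(t)/t
# to 16-bit signed integers before playback over a DAC.
t = np.linspace(-20.0, 20.0, 1001)
g = np.sinc(t / np.pi)                    # np.sinc(x) = sin(pi*x)/(pi*x), hence sin(t)/t

x_max = 1.0                               # chosen full-scale range [-x_max, x_max]
N_Q = 16                                  # number of quantizer bits
S = 2 ** N_Q                              # number of quantization steps
dq = 2 * x_max / S                        # step size Delta q

# Map each real value to the nearest integer level and clip to the representable range.
levels = np.clip(np.round(g / dq), -S // 2, S // 2 - 1).astype(np.int16)
g_q = levels * dq                         # the discrete-valued (quantized) signal
```

Here the integer array `levels` is what would be handed to the DAC, while `g_q` is the corresponding discrete-valued approximation of $g(t)$.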

1.1 Uniform quantization

Uniform quantization is modelled by dividing a limited range of real input values $\left[ -x_\text{max}\ldots x_\text{max}\right]$ into $S$ intervals (quantization steps), each of size $\Delta q=2x_\text{max}/S$. In the process of quantization, all values within each interval are mapped to one value of that interval such that all quantized values are equidistant with spacing $\Delta q$. The position of the quantized value within a quantization interval is often chosen to be either at its middle or on an edge between two adjacent intervals. Since the reason for quantization is the digital representation of a signal, the total number of steps is chosen as $S = 2^{N_\text{Q}}$ in order to optimally utilize $N_\text{Q}$ bits.
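A possible implementation of such a uniform quantizer is sketched below (the function name `uniform_quantize` and its interface are our own assumptions, not part of the webdemo). It covers both common placements of the quantized value: in the middle of each interval (mid-rise) or on the interval edges (mid-tread):

```python
import numpy as np

def uniform_quantize(x, x_max, n_bits, mid_rise=True):
    """Uniform quantizer with S = 2**n_bits steps on [-x_max, x_max]."""
    S = 2 ** n_bits
    dq = 2.0 * x_max / S                               # step size Delta q
    x = np.clip(np.asarray(x, dtype=float), -x_max, x_max)
    if mid_rise:
        # Quantized values in the middle of each interval: (k + 1/2) * dq.
        idx = np.clip(np.floor(x / dq), -S / 2, S / 2 - 1)
        return (idx + 0.5) * dq
    # Mid-tread: quantized values on the interval edges: k * dq (zero is a level).
    idx = np.clip(np.round(x / dq), -S / 2, S / 2 - 1)
    return idx * dq
```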

1.2 Quantization error

The (additive) quantization error is a measure for evaluating the performance of a quantizer. It is defined as the difference between the discrete-valued sample at the output of the quantizer and the continuous-valued sample at its input: $$ e_{\text{Q}, k} = x_{\text{Q}, k} - x_k, $$ such that the quantized signal sample $x_{\text{Q}, k}$ can be seen as a superposition of the input signal sample $x_k$ and the error sample $e_{\text{Q}, k}$. Since the output values have a maximum spacing of $\Delta q$, it can be shown that in general $$ \left|e_{\text{Q}, k}\right| \leq \Delta q. $$
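Continuing the `uniform_quantize` sketch above (with arbitrarily chosen sample values), the error samples and this bound can be checked numerically:

```python
# Quantization error for a few arbitrary samples with N_Q = 3 bits, x_max = 1.
x = np.array([-0.83, -0.10, 0.02, 0.47, 0.99])
x_q = uniform_quantize(x, x_max=1.0, n_bits=3)        # mid-rise, see sketch above
e_q = x_q - x                                         # e_Q,k = x_Q,k - x_k
dq = 2.0 / 2 ** 3                                     # Delta q = 0.25
print(e_q)                                            # approx. [-0.045 -0.025  0.105 -0.095 -0.115]
print(np.all(np.abs(e_q) <= dq))                      # True: |e_Q,k| <= Delta q
```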

1.3 Quantization noise

The signal samples $x_k$ are often assumed to be realizations of a random variable, which means that they form a stationary random process $X_k$. Then $x_{\text{Q}, k}$ and $e_{\text{Q}, k}$ also form random processes $X_{\text{Q}, k}$ and $E_{\text{Q}, k}$, respectively. The random process $E_{\text{Q}, k}$ is called the quantization noise.

For explaining the statistical properties of quantization, we assume that $X_k$ is uniformly distributed in the range $\left[ -x_\text{max}\ldots x_\text{max}\right]$. The quantization noise is then also uniformly distributed. With an appropriate design of the quantizer, which is covered by the following description slides, it is optimal to make the noise symmetric around zero such that $$ -\frac{\Delta q}{2} \leq e_{\text{Q}, k} \leq \frac{\Delta q}{2}. $$ The noise power can then be calculated from the probability density function (PDF) as $P_{E}=\frac{\Delta q^2}{12}$. The input signal power is calculated in the same manner as $P_{X}=\frac{x_\text{max}^2}{3}$. Given that $\Delta q$ is connected to $x_\text{max}$ via $N_\text{Q}$, the signal-to-quantization-noise ratio (SQNR) in linear scale is $$ \gamma_\text{Q} = \frac{P_X}{P_E} = \frac{4x_\text{max}^2}{\Delta q^2} = \left(2^{N_\text{Q}}\right)^2. $$ In logarithmic scale (dB), we obtain $$ \left. \gamma_\text{Q}\right|_\text{dB} = 10\log_{10}\left(\gamma_\text{Q}\right) = N_\text{Q} \cdot 10\log_{10}(4) \approx N_\text{Q} \cdot 6.02\,\text{dB} . $$
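This derivation can be verified with a small numerical experiment (again building on the `uniform_quantize` sketch above; the parameter values are our own choices): the SQNR measured for uniformly distributed input should come close to $N_\text{Q} \cdot 6.02\,\text{dB}$.

```python
# Empirical SQNR for uniformly distributed input vs. the 6.02 dB-per-bit rule.
rng = np.random.default_rng(0)
x_max, N_Q = 1.0, 8
x = rng.uniform(-x_max, x_max, size=1_000_000)
x_q = uniform_quantize(x, x_max, N_Q)                 # see sketch above
e_q = x_q - x
sqnr_db = 10 * np.log10(np.mean(x**2) / np.mean(e_q**2))
print(sqnr_db)                                        # approx. 48.2 dB
print(N_Q * 10 * np.log10(4))                         # 8 * 6.02 dB = 48.16 dB
```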

...

See also lecture notes of ÜT 1.