Polar codes, introduced by Arikan, are the first class of codes proven to achieve the symmetric capacity of any binary-input discrete memoryless channel (B-DMC) under low-complexity encoding and low-complexity successive cancellation (SC) decoding, both of order $\mathcal{O} \left( N \cdot \log N \right)$, as the code length $N$ tends to infinity.
Polar codes are built on several concepts: channel polarization, code construction, polar encoding (a special case of general linear encoding, i.e. more structured), and decoding.
$\bf{\text{1. Channel Polarization}}$
This is the first phase of constructing a polar code. Channel polarization, as its name suggests, is the phase where $N$ distinct bit-channels are synthesized such that each of them is either completely noisy or completely noiseless (strictly speaking, this holds only in the limit of infinite code length $N$). In the scope of polar codes, the noisiness of a bit-channel was first measured by its symmetric capacity or its Bhattacharyya parameter; later, the BER became a more common measure.
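Polarization is easiest to see on the binary erasure channel (BEC), where the Bhattacharyya parameter equals the erasure probability and evolves in closed form: each polarization step turns a channel with parameter $Z$ into a degraded channel with $2Z - Z^2$ and an upgraded channel with $Z^2$. The following sketch (illustrative, not from any reference implementation) shows how most bit-channels drift toward the extremes:

```python
# Minimal sketch of channel polarization for the BEC, where the
# Bhattacharyya parameter Z equals the erasure probability and the two
# synthesized channels have Z_minus = 2Z - Z^2 (worse) and Z_plus = Z^2 (better).

def polarize_bec(z0, n):
    """Return the 2**n Bhattacharyya parameters after n polarization steps."""
    zs = [z0]
    for _ in range(n):
        nxt = []
        for z in zs:
            nxt.append(2 * z - z * z)  # "minus" (degraded) channel
            nxt.append(z * z)          # "plus" (upgraded) channel
        zs = nxt
    return zs

zs = polarize_bec(0.5, 10)  # BEC(0.5), N = 1024 bit-channels
extreme = sum(z < 0.01 or z > 0.99 for z in zs)
print(f"{extreme}/{len(zs)} bit-channels are near-perfect or near-useless")
```

Note that the average of the parameters is preserved at every step (since $2Z - Z^2 + Z^2 = 2Z$), so polarization redistributes reliability rather than creating it.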
$\bf{\text{2. Code Construction}}$
After the channels are polarized comes the code construction phase, which selects the bit-channels over which the information bits are transmitted. In other words, constructing a polar code means producing a vector of bit-channel indices that carry information; the remaining bit-channels carry no data (i.e. frozen bits). Several code construction algorithms exist that vary in complexity, precision, and BER performance. The most well-known of these algorithms are:
- Evolution of Z-parameters based code construction,
- Monte-Carlo simulations based code construction,
- Density evolution based code construction,
- Gaussian approximation based code construction, and
- Transition Probability Matrix (TPM) based code construction.
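The first method in the list above can be sketched in a few lines for the BEC, where the Z-parameters evolve in closed form: evolve the parameters, then keep the $K$ indices with the smallest Z as the information set. The function name and interface here are illustrative, not from any standard library:

```python
# Toy code construction by evolving Bhattacharyya Z-parameters on a BEC.
# The K most reliable bit-channels (smallest Z) carry information;
# the rest are frozen.

def construct_bec(n, k, z0=0.5):
    """Return the sorted information-bit indices for a (2**n, k) polar code."""
    zs = [z0]
    for _ in range(n):
        zs = [f(z) for z in zs
              for f in (lambda z: 2 * z - z * z,   # degraded channel
                        lambda z: z * z)]          # upgraded channel
    order = sorted(range(len(zs)), key=lambda i: zs[i])  # most reliable first
    return sorted(order[:k])

info_set = construct_bec(3, 4)  # (8, 4) polar code constructed for BEC(0.5)
print(info_set)
```

For the (8, 4) code over BEC(0.5) this selects the classic information set $\{3, 5, 6, 7\}$.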
$\bf{\text{3. Polar Encoding}}$
Polar codes belong to the family of coset linear block codes: the information bits are multiplied by one submatrix of the polar generator matrix, and the frozen bits by the complementary submatrix. Polar encoding is characterized by its structured manner, in the sense that all parameters are static and independent of the code rate. Different code rates correspond to different numbers of information bits, while the same generator matrix is used.
Systematic polar encoding is an extension of non-systematic polar encoding: the vector is first encoded non-systematically, the bits at the frozen bit-channel positions are reset to the frozen values, and the result is encoded non-systematically once more. This provides a structured systematic encoder that differs little from the non-systematic one. Systematic encoding provides better BER performance than non-systematic encoding; however, both have the same BLER performance.
$\bf{\text{4. Various Polar Decoders}}$
$\bf{\text{4.1 Successive Cancellation (SC)}}$:
This was the earliest decoder, proposed by Arikan in [1], based on the concept of decoding the bits successively, where each bit decision depends on the previously decoded bits. This successive structure introduces inter-bit dependence and hence error propagation. As a standalone polar decoder it is outperformed, to our knowledge, by most other polar decoders in terms of BER performance. However, its sequential, hierarchical structure makes it well suited to list decoding, as stated later. It was proved in [1] that polar codes achieve the Shannon capacity of any symmetric B-DMC under SC decoding.
$\bf{\text{4.2 Successive Cancellation List (SCL)}}$:
This decoder was proposed in [3] as an extension of the SC decoder: instead of computing a hard decision for each bit successively, the decoder branches into two parallel SC decoders at each decision stage, and each surviving path maintains a path metric that is continuously updated. It was shown in the same work that a list of size 32 is enough to almost achieve the ML bound.
$\bf{\text{4.3 Successive Cancellation List with Cyclic Redundancy Check code (SCL-CRC)}}$:
This decoder is a natural extension of the SCL decoder: a high-rate CRC code is appended to the polar code so that the correct codeword can be selected among the candidate codewords in the final list of paths. It was observed that, whenever an SCL decoder fails, the correct codeword often still exists in the list, so the CRC was proposed as a validity check for each candidate codeword. The SCL-CRC decoder was shown to be comparable to state-of-the-art LDPC codes, outperforming them in most cases.
$\bf{\text{4.4 Belief Propagation (BP)}}$:
Unlike SC-based decoding techniques, BP decoding is free of inter-bit dependence, and thus of error propagation, and it makes no intermediate hard decisions. It updates the LLR values iteratively through right-to-left and left-to-right passes, using the same update functions as in the LDPC domain. For finite-length codes, the BP decoder outperforms the SC decoder in terms of BER performance.
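The update functions mentioned above are applied at every $2 \times 2$ kernel of the polar factor graph during each iteration; a minimal sketch of the two rules (with the check-node rule in its min-sum approximation, as commonly used in both LDPC and polar BP implementations):

```python
# The two message-update rules applied at each 2x2 kernel of the polar
# factor graph during BP iterations -- the same check-node and
# variable-node rules used in LDPC decoding.
import math

def check_update(l1, l2):
    """Min-sum approximation of the box-plus (check-node) combine of two LLRs."""
    return math.copysign(1.0, l1) * math.copysign(1.0, l2) * min(abs(l1), abs(l2))

def var_update(l1, l2):
    """Variable-node (repetition) combine: independent LLRs simply add."""
    return l1 + l2
```

A full BP decoder sweeps these updates right-to-left and left-to-right over the $\log_2 N$ stages of the graph until the LLRs converge or an iteration limit is reached.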