Turbo codes

1. Introduction

At the 1993 IEEE International Conference on Communications in Geneva, two French scientists, Claude Berrou and Alain Glavieux from ENST-Bretagne in Brest, presented a digital coding scheme that could provide virtually error-free communication at data rates and transmit-power efficiencies well beyond what most experts had thought possible. The new error-correction coding scheme, named turbo codes, has revolutionized error-correction coding. It lets engineers design systems that operate very close to the so-called channel capacity: the maximum rate, in bits per second, at which information can be transmitted over a communications channel for a given transmitter power level. This limit was established by Claude Shannon in 1948.
Turbo codes are iteratively decoded parallel concatenated convolutional codes consisting of two constituent convolutional encoders: the first encodes the information bits directly, while the second encodes the information bits after interleaving. Maximum-likelihood decoding of these codes (e.g. with the Viterbi algorithm) is prohibitively complex, because they are equivalent to a time-varying convolutional code with a very large number of states (on the order of 2^1000). The key to overcoming this decoding complexity is the existence of a sub-optimal iterative decoding algorithm that achieves performance very close to that of a maximum-likelihood decoder.
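As a rough illustration of the parallel structure just described, the following Python sketch builds a toy rate-1/3 turbo encoder: systematic bits, parity from one recursive systematic convolutional (RSC) encoder, and parity from a second RSC encoder fed the interleaved bits. The specific constituent code (generators (1, 5/7) in octal), the interleaver length, and the omission of trellis termination and puncturing are illustrative assumptions, not details taken from the original design.

    import random

    def rsc_encode(bits):
        """Parity stream of a rate-1/2 RSC encoder with illustrative
        generators (1, 5/7) in octal (feedback 7, forward 5)."""
        s1 = s2 = 0                      # two-bit shift-register state
        parity = []
        for b in bits:
            fb = b ^ s1 ^ s2             # feedback: input XOR taps of 7 = 1 + D + D^2
            parity.append(fb ^ s2)       # parity from forward polynomial 5 = 1 + D^2
            s2, s1 = s1, fb              # shift the register
        return parity

    def turbo_encode(info_bits, interleaver):
        """Rate-1/3 parallel concatenation: systematic bits, parity of the
        first encoder, and parity of the second encoder driven by the
        interleaved information bits."""
        parity1 = rsc_encode(info_bits)
        parity2 = rsc_encode([info_bits[i] for i in interleaver])
        return info_bits, parity1, parity2

    # Toy example: 16 information bits and a random interleaver.
    K = 16
    info = [random.randint(0, 1) for _ in range(K)]
    pi = random.sample(range(K), K)
    systematic, p1, p2 = turbo_encode(info, pi)
    print(systematic, p1, p2, sep="\n")

Note that both constituent encoders see the same information bits, only in a different order; it is this shared information, exchanged between two soft-input soft-output decoders across the interleaver, that the iterative decoding algorithm exploits.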
In 1995, the serial analogues of turbo codes were introduced. Serially concatenated convolutional codes are built from the same constituent encoders and interleaver elements as turbo codes, but the encoders are concatenated in series rather than in parallel: the output of the outer encoder, after interleaving, forms the input to the inner encoder. Serial schemes achieve performance comparable to that of parallel schemes.
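For contrast with the parallel arrangement, the short Python sketch below shows the serial one: the outer codeword is interleaved as a whole and then re-encoded by the inner encoder. The rate-1/2 (7, 5) convolutional code used for both encoders and the interleaver spanning the full outer codeword are illustrative assumptions; practical serially concatenated schemes choose the outer and inner codes and their rates differently and terminate the trellises.

    import random

    def conv_encode(bits):
        """Rate-1/2 non-recursive convolutional encoder, illustrative
        generators (7, 5) in octal."""
        s1 = s2 = 0
        out = []
        for b in bits:
            out.append(b ^ s1 ^ s2)      # generator 7 = 1 + D + D^2
            out.append(b ^ s2)           # generator 5 = 1 + D^2
            s2, s1 = s1, b               # shift the register
        return out

    def serial_concatenate(info_bits, interleaver):
        """Outer encoder -> interleaver -> inner encoder."""
        outer_cw = conv_encode(info_bits)              # outer codeword, length 2K
        permuted = [outer_cw[i] for i in interleaver]  # interleave the whole outer codeword
        return conv_encode(permuted)                   # overall rate 1/4 in this toy setup

    # Toy example: 8 information bits; the interleaver spans the outer codeword.
    K = 8
    info = [random.randint(0, 1) for _ in range(K)]
    pi = random.sample(range(2 * K), 2 * K)
    codeword = serial_concatenate(info, pi)
    print(len(codeword), codeword)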