In computer communications, discrete data are first channel coded and then modulated into continuous signals for transmission and reception. In a hard detection setting, only demodulated data are provided to the decoder. If soft information on received signal quality is provided, its use can improve decoding accuracy. Incorporating it, however, typically comes at the expense of increased algorithmic complexity. Here we introduce a mechanism that uses binarized soft information in the Guessing Random Additive Noise Decoding (GRAND) framework such that decoding accuracy is increased while computational complexity is decreased. The principle envisages a codebook-independent quantization of soft information in which each demodulated symbol is additionally flagged as reliable or unreliable. We introduce two algorithms that incorporate this information, one of which identifies a Maximum Likelihood (ML) decoding, while the other reports either an ML decoding or an error. Both are suitable for use with any block code and are capacity-achieving. We determine error exponents and asymptotic complexity. The algorithms achieve higher rates with lower error probabilities and less algorithmic complexity than their hard detection counterparts. As practical illustrations, we compare their performance with majority logic decoding of a Reed-Muller code and with Berlekamp-Massey decoding of a BCH code, and we establish the performance of Random Linear Codes.
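
To make the guessing mechanism concrete, the following is a minimal Python sketch of GRAND-style decoding with one-bit symbol reliability, assuming a binary linear block code whose codebook membership is tested through a parity-check matrix H, and a channel where lower-weight noise patterns are more likely (e.g., a BSC with crossover probability below 1/2). The function name srgrand, the weight-ordered query schedule, and the max_queries abandonment threshold are illustrative assumptions, not the paper's exact specification.

    import itertools
    import numpy as np

    def srgrand(y, unreliable_mask, H, max_queries=None):
        # y: hard-demodulated bits (length-n 0/1 array)
        # unreliable_mask: boolean array; True flags a symbol as unreliable
        # H: parity-check matrix of the block code over GF(2)
        # max_queries: if set, abandon after this many codebook queries and
        #              report an error (the ML-or-error variant); otherwise
        #              run to completion (the ML variant)
        unreliable = np.flatnonzero(unreliable_mask)
        queries = 0
        # Guess putative noise patterns confined to the unreliable positions,
        # from most likely to least likely (increasing Hamming weight).
        for weight in range(len(unreliable) + 1):
            for flips in itertools.combinations(unreliable, weight):
                z = np.zeros_like(y)
                z[list(flips)] = 1
                candidate = y ^ z        # subtract the noise guess (XOR in GF(2))
                queries += 1
                if not ((H @ candidate) % 2).any():  # codebook membership test
                    return candidate     # first codeword found is an ML decoding
                if max_queries is not None and queries >= max_queries:
                    return None          # abandon: report a decoding error
        return None                      # no codeword within the unreliable mask

For example, with the parity-check matrix of a (7,4) Hamming code and a mask marking two received positions as unreliable, the loop queries at most 1 + 2 + 1 = 4 noise patterns before returning. Restricting queries to the unreliable positions is what produces the complexity reduction described above, relative to hard detection GRAND, which must search over all n positions.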