Soft-Decision Demodulation
Attention: Rework for RST
Unlike hard demodulation, which seeks the most likely transmitted symbol for a given received sample, the goal of soft demodulation is to derive a probability metric for each bit of the received sample. When the output of the demodulator is used in conjunction with forward error-correction coding, this soft bit information can improve the error detection and correction capabilities of most decoders, usually by about 1.5 dB; it provides a clue to the decoder as to the confidence that each bit was received correctly. For turbo-product codes {cite:Berrou:1993} and low-density parity check (LDPC) codes {cite:Gallager:1962}, soft bit information is nearly a requirement.
Before we continue, let us define some nomenclature:
\(M=2^m\) is the number of points in the constellation (constellation size).
\(m=\log_2(M)\) is the number of bits per symbol in the constellation (modulation depth).
\(s_k\) is the symbol at index \(k\) on the complex plane; \(k \in \{0,1,2,\ldots,M-1\}\).
\(\{b_0,b_1,\ldots,b_{m-1}\}\) is the encoded bit string of \(s_k\) and is simply the value of \(k\) in binary.
\(b_j\) is the bit at index \(j\); \(b_j \in \{0,1\}\) and \(j \in \{0,1,\ldots,m-1\}\).
\(\mathcal{S}_M = \{s_0,s_1,\ldots,s_{M-1}\}\) is the set of all symbols in the constellation where \(1/M \sum_k \|s_k\|_2^2 = 1\).
\(\mathcal{S}_{b_j=t}\) is the subset of \(\mathcal{S}_M\) where the bit at index \(j\) is equal to \(t \in \{0,1\}\).
For example, let the modulation scheme be the generic 4-PSK with the constellation map defined in [fig-modem-psk-4], which has \(m=2\), \(M=4\), and \(\mathcal{S}_M = \{s_0=1, s_1=j, s_2=-j, s_3=-1\}\). The subsets are:

\(\mathcal{S}_{b_0=0} = \{s_0= 1, s_2=-j\}\) (right-most bit is 0)

\(\mathcal{S}_{b_0=1} = \{s_1= j, s_3=-1\}\) (right-most bit is 1)

\(\mathcal{S}_{b_1=0} = \{s_0= 1, s_1= j\}\) (left-most bit is 0)

\(\mathcal{S}_{b_1=1} = \{s_2=-j, s_3=-1\}\) (left-most bit is 1)
A few key points:
\(\mathcal{S}_{b_j=0} \cap \mathcal{S}_{b_j=1} = \emptyset, \, \forall_j\).
\(\mathcal{S}_{b_j=0} \cup \mathcal{S}_{b_j=1} = \mathcal{S}_M, \, \forall_j\).
Let us represent the received signal at a sampling instant \(n\) as

$$r(n) = s + w(n)$$

where \(s\) is the transmitted symbol and \(w\) is a zero-mean complex Gaussian random variable with variance \(\sigma_n^2 = E\{w w^*\}\). Let the transmitted symbols be i.i.d. and drawn from an \(M\)-point constellation, each carrying \(m\) bits of information, such that the symbols belong to the set of constellation points \(s_k \in \mathcal{S}_M\) and \(E\{s_k s_k^*\}=1\). Assuming perfect channel knowledge, timing, and carrier offset recovery, the log-likelihood ratio (LLR) of each bit \(b_j\) is shown in {cite:LeGoff:1994(Eq. (8))} to be the logarithm of the ratio of the two conditional a posteriori probabilities of each bit value having been transmitted, viz.

$$\Lambda(b_j) = \ln\left(\frac{P\left(b_j=1 \mid \text{observation}\right)}{P\left(b_j=0 \mid \text{observation}\right)}\right)$$
Assuming that the channel is memoryless, the “observation” is simply the received sample \(r(n)\) in ([eqn-modem-digital-soft-received_signal]) and does not depend on previous symbols; therefore \(P\left(b_j=t \mid \text{observation}\right) = P\left(b_j=t \mid r(n)\right)\) for \(t \in \{0,1\}\). Furthermore, by assuming that the transmitted symbols are equally probable and that the noise follows a Gaussian distribution {cite:Qiang:2003}, the LLR reduces to

$$\Lambda(b_j) = \ln\left(\frac{\sum_{s_k \in \mathcal{S}_{b_j=1}} \exp\left\{-\left\|r - s_k\right\|^2 / \left(2\sigma_n^2\right)\right\}}{\sum_{s_k \in \mathcal{S}_{b_j=0}} \exp\left\{-\left\|r - s_k\right\|^2 / \left(2\sigma_n^2\right)\right\}}\right)$$
As shown in {cite:Qiang:2003}, a sub-optimal simplified LLR expression can be obtained by replacing each summation in ([eqn-modem-digital-soft-LLR]) with its single largest component: \(\ln \sum_j {e^{z_j}} \approx \max_j \ln (e^{z_j}) = \max_j z_j\). This approximation provides a tight bound as long as the sum is dominated by its largest component. The approximate LLR becomes

$$\tilde{\Lambda}(b_j) = \frac{1}{2\sigma_n^2}\left(\min_{s_k \in \mathcal{S}_{b_j=0}} \left\|r - s_k\right\|^2 - \min_{s_k \in \mathcal{S}_{b_j=1}} \left\|r - s_k\right\|^2\right)$$
Conveniently, both the exponential and logarithm operations disappear; furthermore, the noise variance becomes a simple scaling factor that only influences the magnitude, and hence the reliability, of the resulting LLR.
[fig-modem-demodsoft] depicts the soft bit demodulation algorithm for a received 16-QAM signal point, corrupted by noise. The received sample is \(r = -0.65 - j0.47\), which results in a hard demodulation of 0001. The subfigures depict each of the four bits in the symbol, \(\{b_0,b_1,b_2,b_3\}\), for which the soft bit output is given, and show the nearest symbol for which a 0 and a 1 occurs at that particular bit index. For example, [fig-modem-demodsoft-b2] shows that the nearest symbol containing a 0 at bit index 2 is \(s_1=\) 0001 (the hard-decision demodulation) at \((-3 - j)/\sqrt{10}\), while the nearest symbol containing a 1 at bit index 2 is \(s_3=\) 0011 at \((-3 + j)/\sqrt{10}\). Plugging \(s^-=s_1\) and \(s^+=s_3\) into ([eqn-modem-digital-soft-LLR_approx]) and evaluating for \(\sigma_n=0.2\) gives \(\tilde{\Lambda}(b_2) = -7.43\). Because this number is large and negative, it is very likely that the transmitted bit \(b_2\) was 0. This can be verified in [fig-modem-demodsoft-b2], which shows that the distance from \(r\) to \(s^-\) is much shorter than that to \(s^+\).
Conversely, [fig-modem-demodsoft-b1] shows that \(b_1\) cannot be demodulated with such certainty; the distances from \(r\) to each of \(s^+\) and \(s^-\) are about the same. This is reflected in the relatively small LLR value of \(\tilde{\Lambda}(b_1)=-0.28\) which suggests a high uncertainty in the demodulation of \(b_1\).
One major drawback of computing ([eqn-modem-digital-soft-LLR_approx]) is that finding each minimum requires searching over all constellation points \(s_k \in \mathcal{S}_{b_j=t}\) for the one that minimizes \(\|r-s_k\|\), which is particularly time-consuming.
To circumvent this, liquid-dsp searches only over a subset \(\mathcal{S}_k \subset \mathcal{S}_M\) of points nearest to the hard-demodulated symbol (\(\mathcal{S}_k\) typically contains only about four points). This can be done quickly because the hard-demodulated symbol can be found systematically for most modulation schemes (e.g. for LIQUID_MODEM_QAM only \(\mathcal{O}(\log_2 M)\) comparisons are needed to make a hard decision).
If for a given bit value no symbols are found within \(\mathcal{S}_k\), that is \(\mathcal{S}_k \cap \mathcal{S}_{b_j=t} = \emptyset\), then the magnitude of \(\tilde{\Lambda}(b_j)\) is already sufficiently large and the sample contains little soft bit information for that bit; that is, \(\tilde{\Lambda}(b_j) \gg 0\) when \(\mathcal{S}_k \cap \mathcal{S}_{b_j=0} = \emptyset\) and \(\tilde{\Lambda}(b_j) \ll 0\) when \(\mathcal{S}_k \cap \mathcal{S}_{b_j=1} = \emptyset\). It is guaranteed that \(\left( \mathcal{S}_k \cap \mathcal{S}_{b_j=0}\right) \cup \left( \mathcal{S}_k \cap \mathcal{S}_{b_j=1}\right) \neq \emptyset\) because each \(s_k\) must be in either \(\mathcal{S}_{b_j=0}\) or \(\mathcal{S}_{b_j=1}\).
liquid-dsp performs soft demodulation with the modem_demodulate_soft(q,x,*symbol,*soft_bits) method. This is the same as the regular demodulate method, but returns the “soft” bits in addition to an estimate of the original symbol. Soft bit information is stored in liquid-dsp as type unsigned char, with a value of 255 representing a very likely ``1`` and a value of 0 representing a very likely ``0``. The erasure condition is 127.
| Soft bit value | Interpretation |
|---|---|
| 255 | Very likely ``1`` |
| \(\vdots\) | \(\vdots\) |
|  | Likely ``1`` |
| \(\vdots\) | \(\vdots\) |
| 127 | Erasure |
| \(\vdots\) | \(\vdots\) |
|  | Likely ``0`` |
| \(\vdots\) | \(\vdots\) |
| 0 | Very likely ``0`` |
The fec and packetizer objects can make use of this soft information to improve the probability of decoding a packet (see [section-fec-soft] and [section-framing-packetizer] for details).
Error Vector Magnitude
The error vector magnitude (EVM) of a demodulated symbol is simply the root mean square of the error vector between the received sample before demodulation and the expected transmitted symbol, viz.

$$\mathrm{EVM} = \sqrt{\frac{1}{N}\sum_{n=0}^{N-1} \left\| r(n) - \hat{s}(n) \right\|^2}$$
EVM is returned by many of the framing objects (see [section-framing]) because it gives a good indication of signal distortion resulting from noise, inter-symbol interference, etc. If the only channel impairment is noise (e.g. perfect symbol timing), then because the constellation is normalized to unit energy the SNR can be estimated as

$$\widehat{\mathrm{SNR}}_{\mathrm{dB}} = -20\log_{10}\left(\mathrm{EVM}\right)$$