4.2-21
4.2.4 Orthogonal, Biorthogonal and Simplex Signals
• In PAM, QAM and PSK, we had only one basis function. For orthogonal,
biorthogonal and simplex signals, however, we use more than one
orthogonal basis function, so the signal space is N-dimensional.
Examples: the Fourier basis; time-translated pulses; the
Walsh-Hadamard basis.
o We’ve become used to SER getting worse quickly as we add bits to the
symbol – but with these orthogonal signals it actually gets better.
o The drawback is bandwidth occupancy; the number of dimensions in
bandwidth W and symbol time Ts is
N ≈ 2 W T_s
o So we use these sets when the power budget is tight, but there’s plenty
of bandwidth.
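As a quick numeric illustration of the dimension budget N ≈ 2WT_s (the bandwidth and symbol time below are assumed example values, not from the notes):

```python
# Dimension budget N ~ 2*W*Ts. Example numbers are assumed for illustration:
W = 1e6        # bandwidth in Hz (assumed: 1 MHz)
Ts = 8e-6      # symbol time in seconds (assumed: 8 us)
N = 2 * W * Ts
print(N)       # about 16 dimensions -> room for M = 16 orthogonal signals (k = 4 bits)
```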
Orthogonal Signals
• With orthogonal signals, we select only one of the orthogonal basis
functions for transmission, so the number of signals M equals the
number of dimensions N.
4.2-22
• Examples of orthogonal signals are frequency-shift keying (FSK), pulse
position modulation (PPM), and choice of Walsh-Hadamard functions
(note that with Fourier basis, it’s FSK, not OFDM).
• Energy and distance:
o The signals are equidistant, as can be seen from the sketch or from
  s_i − s_j = [ 0, …, √E_s (← location i), …, −√E_s (← location j), …, 0 ]^T
so
  |s_i − s_j|² = 2 E_s = 2 log₂(M) E_b = 2k E_b = d²_min,  for all i ≠ j
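The equidistance property is easy to verify numerically. A minimal sketch, with M and E_b as assumed example values:

```python
import numpy as np

# M-ary orthogonal set: signal m puts all its energy Es on basis function m,
# so the signal matrix is sqrt(Es) times the identity. Values are assumed.
M = 8                         # number of signals (k = 3 bits)
k = int(np.log2(M))
Eb = 1.0                      # energy per bit (assumed)
Es = k * Eb                   # energy per symbol
S = np.sqrt(Es) * np.eye(M)   # rows are the signal vectors s_1 ... s_M

# Every pair of distinct signals sits at the same squared distance 2*k*Eb.
d2 = np.array([np.sum((S[i] - S[j])**2)
               for i in range(M) for j in range(M) if i != j])
print(np.allclose(d2, 2 * k * Eb))   # True
```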
4.2-23
• Effect of adding bits:
o For a fixed energy per bit, adding more bits increases the minimum
distance – strong contrast to PAM, QAM, PSK.
o But each added bit doubles the number of signals M, which equals the
number of dimensions N – and that doubles the bandwidth!
• Error analysis for orthogonal signals. Equal energy, equiprobable
signals, so the optimal receiver picks the signal with the largest
correlator output.
o Error probability is the same for all signals. If s_1 was sent, then
the correlation vector is
  c = [ √E_s + n_1, n_2, …, n_M ]^T
with real components.
4.2-24
o Assume s_1 was sent. The probability of a correct symbol decision,
conditioned on the value of the received c_1, is
  P(c | c_1) = P[ n_2 < c_1 ∧ n_3 < c_1 ∧ … ∧ n_M < c_1 ]
             = ( 1 − Q( c_1 / √(N_0/2) ) )^(M−1)
o Then the unconditional probability of correct symbol detection is
  P_cs = ∫ ( 1 − Q( c_1 / √(N_0/2) ) )^(M−1) p(c_1) dc_1
       = ∫ ( 1 − Q( c_1 / √(N_0/2) ) )^(M−1) (1/√(π N_0)) exp( −(c_1 − √E_s)² / N_0 ) dc_1
       = (1/√(2π)) ∫ ( 1 − Q(u) )^(M−1) exp( −(u − √(2γ_s))² / 2 ) du
with all integrals over (−∞, ∞), and γ_s = E_s/N_0 the SNR per symbol.
Needs a numerical evaluation.
o The unconditional probability of symbol error (SER) is
P_es = 1 − P_cs
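The correct-detection integral is straightforward to evaluate numerically. A minimal sketch (the grid width, sample count and the 8 dB operating point are arbitrary assumed choices):

```python
from math import erfc, sqrt, exp, pi, log2

def Q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

def ser_orthogonal(M, gamma_b_dB, n=40000, half_width=12.0):
    """SER of coherent M-ary orthogonal signaling, evaluating the
    probability-of-correct-detection integral with a midpoint sum."""
    k = log2(M)
    gamma_s = k * 10**(gamma_b_dB / 10.0)   # Es/N0, SNR per symbol
    mean = sqrt(2.0 * gamma_s)
    du = 2.0 * half_width / n
    pcs = 0.0
    for i in range(n):
        u = mean - half_width + (i + 0.5) * du
        pcs += (1.0 - Q(u))**(M - 1) * exp(-0.5 * (u - mean)**2) * du
    pcs /= sqrt(2.0 * pi)
    return 1.0 - pcs

for M in (2, 4, 8, 16):
    print(M, ser_orthogonal(M, 8.0))   # at gamma_b = 8 dB, SER falls as M grows
```

For M = 2 this reduces to the binary orthogonal result Q(√γ_b), which makes a handy sanity check.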
4.2-25
• Now for the bit error rate.
o All errors are equally likely, each with probability
P_es/(M − 1) = P_es/(2^k − 1). No point in Gray coding.
o A symbol error can produce i bit errors in C(k, i) equiprobable
ways, so
  P_eb = Σ_{i=1}^{k} (i/k) C(k, i) P_es/(2^k − 1)
       = ( 2^(k−1) / (2^k − 1) ) P_es ≈ P_es / 2
About half the bits are in error in the average symbol error.
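The closed form follows from the identity Σ_i i·C(k, i) = k·2^(k−1); a quick numerical check:

```python
from math import comb

# Verify: sum_{i=1..k} (i/k) C(k,i) / (2^k - 1)  ==  2^(k-1) / (2^k - 1)
# i.e. on average about half the bits are wrong in an erroneous symbol.
for k in range(1, 11):
    lhs = sum((i / k) * comb(k, i) for i in range(1, k + 1)) / (2**k - 1)
    rhs = 2**(k - 1) / (2**k - 1)
    assert abs(lhs - rhs) < 1e-12
    print(k, rhs)   # tends to 1/2 as k grows
```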
• The SER for orthogonal signals is striking:
[Figure: "Symbol Error Prob, Orthogonal Signals" – probability of
symbol error vs. SNR per bit, γ_b (dB), curves for M = 2, 4, 8, 16.
The SER improves as we add bits! But the bandwidth doubles with each
additional bit.]
4.2-26
• Why does SER drop as M increases?
o All points are separated by the same d² = 2 log₂(M) E_b. Since d²
increases linearly with k = log₂(M), the pairwise error probability
(between two specific points) decreases exponentially with k, the
number of bits per symbol.
o Offsetting this is the number M-1 of neighbours of any point: M-1
ways of getting it wrong, and M increases exponentially with k.
o Which effect wins? Can’t get much analytical traction from the
integral expression two pages back. So fall back to a union bound.
• Union bound analysis:
o Without loss of generality, assume s_1 was transmitted.
o Define E_m, m = 2, …, M, as the event that r is closer to s_m than to s_1.
4.2-27
o Note that E_m is a pairwise error. It does not imply that the
receiver's decision is m. In fact, we have events E_2, …, E_M, and
they are not mutually exclusive, since r could lie closer to two or
more points than to s_1:
o The overall error event is E = E_2 ∪ E_3 ∪ … ∪ E_M, so the SER is
bounded by the sum of the pairwise event probabilities¹
  P_es = P[E] = P[ E_2 ∪ E_3 ∪ … ∪ E_M ] ≤ Σ_{m=2}^{M} P[E_m]
       = (M − 1) Q( √(d²/(2N_0)) ) = (M − 1) Q( √(log₂(M) γ_b) )
       < 2^k e^(−k γ_b/2) = e^(−k (γ_b/2 − ln 2))
¹ A general expansion is
  P[ A_1 ∪ A_2 ∪ … ∪ A_M ] = Σ_{i} P[A_i] − Σ_{i<j} P[A_i ∩ A_j]
                             + Σ_{i<j<k} P[A_i ∩ A_j ∩ A_k] − etc.
4.2-28
o From the bound, if γ_b > 2 ln 2 = 1.386, then loading more bits onto a
symbol causes the upper bound on SER to drop exponentially to zero.
o Conversely, if γ_b < 2 ln 2, increasing k causes the upper bound on
SER to rise exponentially (SER itself saturates at 1).
o Too bad the dimensionality, the number of correlators and the
bandwidth also increase exponentially with k.
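The threshold behaviour of the exponential bound can be seen directly; the two sample γ_b values below (one above and one below 2 ln 2) are assumed for illustration:

```python
from math import exp, log

# Exponential upper bound on SER for orthogonal signaling:
#   P_es < exp(-k*(gamma_b/2 - ln 2)),  threshold at gamma_b = 2 ln 2 ~ 1.386.
def bound(k, gamma_b):
    return exp(-k * (gamma_b / 2.0 - log(2.0)))

for k in (2, 4, 8, 16):
    print(k, bound(k, 2.0), bound(k, 1.0))
# gamma_b = 2.0 > 2 ln 2: the bound falls toward 0 as k grows.
# gamma_b = 1.0 < 2 ln 2: the bound grows with k (true SER saturates at 1).
```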
• This scheme is called “block orthogonal coding.” Thresholds are
characteristic of coded systems.
• Remember that this analysis is based on bounds…
[Figure: "SER and Bound, Orthogonal Signals" – probability of symbol
error vs. SNR per bit, γ_b (dB), for M = 2, 4, 8, 16. True SERs are
solid lines, bounds are dashed. The upper bound is useful to show
decreasing SER, but is too loose for a good approximation.]
4.2-29
Biorthogonal Signals
• If you have coherent detection, why waste the other side of the axis?
Double the number of signal points (add one bit) by transmitting ±√E_s
on each basis function. Now M = 2N, with no increase in bandwidth. Or
keep the same M and cut the bandwidth in half.
• Energy and distance:
o All signal points are equidistant from s_i,
  |s_i − s_k| = d_min = √(2E_s)   (the same as orthogonal signals)
except one – the reflection through the origin – which is farther away:
  |s_i − (−s_i)| = 2√E_s
4.2-30
• Symbol error rate:
The probability of error is messy, but
the union bound is easy. A signal is
equidistant from all other signals but its
own complement.
o So the union bound on SER is
  P_s ≤ (M − 2) Q(√γ_s) + Q(√(2γ_s))
      ≈ (M − 2) Q(√γ_s)            (second term redundant; Sec'n 4.5.4)
      = (M − 2) Q( √(log₂(M) γ_b) )
which is slightly less than for orthogonal signaling with the same M and
same γ_b. The major benefit is that biorthogonal needs only half the
bandwidth of orthogonal, since it has half the number of dimensions.
o And the bit error probability is
  P_b ≈ P_s / 2
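A side-by-side look at the orthogonal and biorthogonal union bounds, at the same M and an assumed operating point of γ_b = 8 dB:

```python
from math import erfc, sqrt, log2

def Q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / sqrt(2.0))

# Union bounds at the same M and same gamma_b (8 dB assumed):
#   orthogonal  : (M-1) Q(sqrt(log2(M)*gamma_b)), using N = M   dimensions
#   biorthogonal: (M-2) Q(sqrt(log2(M)*gamma_b)), using N = M/2 dimensions
gamma_b = 10**(8.0 / 10.0)
for M in (4, 8, 16):
    arg = sqrt(log2(M) * gamma_b)
    print(M, (M - 1) * Q(arg), (M - 2) * Q(arg))  # biorthogonal slightly smaller
```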
4.2-31
Simplex Signals
• The orthogonal signals can be seen as a mean value, shared by all,
plus signal-dependent increments. The mean signal (the centroid) is
  s̄ = (1/M) Σ_{m=1}^{M} s_m
o The mean does not contribute to the distinguishability of the
signals – so why not subtract it?
  s'_m = s_m − s̄ = √E_s [ −1/M, …, (1 − 1/M) (← location m), …, −1/M ]^T
where E_s is the original signal energy.
4.2-32
o Removing the mean lowers the dimensionality by one. After rotation
into tidier coordinates:
• They still have equal energy (though they are no longer orthogonal).
That energy is
  E'_s = |s'_m|² = E_s ( (M − 1)(1/M)² + (1 − 1/M)² ) = (1 − 1/M) E_s
The energy saving is small for larger M.
4.2-33
• The correlation among signals is
  ρ_mn = Re[ s'_m · s'_n ] / ( |s'_m| |s'_n| ) = −1/(M − 1)   for m ≠ n
A uniform negative correlation.
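The simplex properties (equal reduced energy, uniform negative correlation) are easy to confirm numerically; M and E_s below are assumed example values:

```python
import numpy as np

# Build a simplex set by subtracting the centroid from an orthogonal set.
M = 8
Es = 3.0                          # original symbol energy (assumed)
S = np.sqrt(Es) * np.eye(M)       # orthogonal signals, one per row
Sp = S - S.mean(axis=0)           # s'_m = s_m - s_bar

# Equal, reduced energy: E'_s = (1 - 1/M) Es
energies = np.sum(Sp**2, axis=1)
print(np.allclose(energies, (1 - 1/M) * Es))        # True

# Uniform negative correlation: rho_mn = -1/(M-1) for m != n
G = (Sp @ Sp.T) / ((1 - 1/M) * Es)                  # normalized correlations
off_diag = G[~np.eye(M, dtype=bool)]
print(np.allclose(off_diag, -1/(M - 1)))            # True
```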
• The SER is easy. Translation doesn't change the error rate, so the SER
is that of orthogonal signals with an M/(M − 1) SNR boost.
4.2.5 Vertices of a Hypercube and Generalizations
• More signals defined on a multidimensional space. Unlike orthogonal
signaling, we now allow more than one dimension to be used in a symbol.
• Vertices of a hypercube is straightforward: two-level PAM (binary
antipodal) on each of the N dimensions:
o With the Fourier basis, it is BPSK on each frequency at once – a
simple OFDM
o With the time translate basis, it is a classical NRZ transmission:
4.2-34
o Signal vectors:
  s_m = [ s_m1, s_m2, …, s_mN ]^T   with s_mn = ±√(E_s/N)
for m = 1, …, M and M = 2^N.
o In signal space, it looks like this:
o Every point has the same distance from the origin, hence the same
energy:
  |s_m|² = E_s = log₂(M) E_b = N E_b
4.2-35
o Minimum distance occurs for differences in a single coordinate:
  s_m = √(E_s/N) [ 1, 1, 1, 1, 1 ]^T    s_n = √(E_s/N) [ 1, 1, 1, −1, 1 ]^T
  d_min = |s_m − s_n| = 2√(E_s/N) = 2√E_b
More generally, for d_H disagreements (Hamming distance),
  |s_m − s_n| = 2√(d_H E_s/N) = 2√(d_H E_b)
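The distance rule d = 2√(d_H · E_b) can be checked over every vertex pair; N and E_b below are assumed example values:

```python
import numpy as np
from itertools import product

# Hypercube-vertex signals: s_mn = +-sqrt(Es/N) in each of N dimensions.
N = 4
Eb = 1.0                     # energy per bit (assumed)
Es = N * Eb
V = np.sqrt(Es / N) * np.array(list(product([1, -1], repeat=N)))  # M = 2^N rows

# Distance between two vertices with Hamming distance h is 2*sqrt(h*Eb).
for i in range(len(V)):
    for j in range(len(V)):
        h = int(np.sum(V[i] != V[j]))        # coordinates that disagree
        d = np.linalg.norm(V[i] - V[j])
        assert np.isclose(d, 2.0 * np.sqrt(h * Eb))
print("verified for all", len(V)**2, "pairs")
```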
o What are the effects on d_min and bandwidth if we increase the number
of bits, keeping E_b fixed?
• It’s easy to generalize from binary to PAM, PSK, QAM, etc. on each of the
dimensions:
o With the Fourier basis, it is OFDM
o With time translates, it is the usual serial transmission.
4.2-36
• Error analysis:
o All points have the same energy, E_s = N E_b, and the Euclidean
distance for Hamming distance h is d = 2√(h E_b).
o The union bound over nearest neighbours is
  P_s ≤ N Q(√(2γ_b)) = log₂(M) Q(√(2γ_b))
with P_b dependent on the labels.
o If labeling is done independently on different dimensions (see
sketch), then the independence of the noise causes the receiver to
decompose into N independent binary detectors. Then P_s is
meaningless, and
  P_b = Q(√(2γ_b))
– just binary antipodal signaling.
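The decomposition into N independent binary antipodal detectors can be checked by Monte Carlo; the dimension count, SNR and sample size below are assumed choices:

```python
import numpy as np
from math import erfc, sqrt

# Monte Carlo: with independent labeling, each dimension of the hypercube
# scheme is an independent binary antipodal channel, so the measured BER
# should match Q(sqrt(2*gamma_b)). N and gamma_b are assumed values.
rng = np.random.default_rng(1)
N = 8                          # dimensions per symbol
gamma_b = 10**(4.0 / 10.0)     # Eb/N0 at 4 dB (assumed operating point)
Eb, N0 = 1.0, 1.0 / gamma_b
n_sym = 200_000

bits = rng.integers(0, 2, size=(n_sym, N))
s = (2 * bits - 1) * np.sqrt(Eb)                       # +-sqrt(Eb) per dimension
r = s + rng.normal(0.0, np.sqrt(N0 / 2), size=s.shape) # AWGN, variance N0/2
bits_hat = (r > 0).astype(int)                         # N independent detectors

Pb_sim = np.mean(bits_hat != bits)
Pb_theory = 0.5 * erfc(sqrt(2 * gamma_b) / sqrt(2.0))  # Q(sqrt(2*gamma_b))
print(Pb_sim, Pb_theory)   # the two agree closely
```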
4.2-37
• Generalization: use multilevel signals (PAM) in each dimension. Again,
if labeling is independent by dimension, then it's just independent and
parallel use, like QAM. Not too exciting. Yet. These signals form a
finite lattice.
• Generalization: use a subset of the cube vertices, with points selected for
greater minimum distance. This is a binary block code.
o Advantage: greater Euclidean distance
o Disadvantage: lower data rate