
Digital Communication Exercises

Contents

1 Converting a Digital Signal to an Analog Signal

2 Decision Criteria and Hypothesis Testing

3 Generalized Decision Criteria

4 Vector Communication Channels

5 Signal Space Representation

6 Optimal Receiver for the Waveform Channel

7 The Probability of Error

8 Bit Error Probability

9 Connection with the Concept of Capacity

10 Continuous Phase Modulations

11 Colored AGN Channel

12 ISI Channels and MLSE

13 Equalization

14 Non-Coherent Reception


1 Converting a Digital Signal to an Analog Signal

1. [1, Problem 4.15]. Consider a four-phase PSK signal represented by the equivalent lowpass signal
$$u(t) = \sum_n I_n g(t - nT)$$
where $I_n$ takes on one of the four possible values $\sqrt{1/2}(\pm 1 \pm j)$ with equal probability. The sequence of information symbols $\{I_n\}$ is statistically independent (i.i.d.).

(a) Determine the power density spectrum of $u(t)$ when
$$g(t) = \begin{cases} A, & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$

(b) Repeat (1a) when
$$g(t) = \begin{cases} A\sin(\pi t/T), & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$

(c) Compare the spectra obtained in (1a) and (1b) in terms of the 3 dB bandwidth and the bandwidth to the first spectral zero. Here you may find the frequency numerically.

Solution:

We have that $S_U(f) = \frac{1}{T}|G(f)|^2\sum_{m=-\infty}^{\infty} C_I(m)e^{-j2\pi fmT}$, with $E(I_n)=0$, $E(|I_n|^2)=1$, hence
$$C_I(m) = \begin{cases} 1, & m = 0, \\ 0, & m \ne 0, \end{cases}$$
therefore $\sum_{m=-\infty}^{\infty} C_I(m)e^{-j2\pi fmT} = 1 \Rightarrow S_U(f) = \frac{1}{T}|G(f)|^2$.

(a) For the rectangular pulse:
$$G(f) = AT\frac{\sin \pi fT}{\pi fT}e^{-j2\pi fT/2} \;\Rightarrow\; |G(f)|^2 = A^2T^2\frac{\sin^2 \pi fT}{(\pi fT)^2}$$
where the factor $e^{-j2\pi fT/2}$ is due to the $T/2$ shift of the rectangular pulse from the center. Hence:
$$S_U(f) = A^2T\frac{\sin^2 \pi fT}{(\pi fT)^2}$$

(b) For the sinusoidal pulse: $G(f) = \int_0^T A\sin(\pi t/T)\exp(-j2\pi ft)\,dt$. By using the trigonometric identity $\sin x = \frac{\exp(jx)-\exp(-jx)}{2j}$ it is easily shown that:
$$G(f) = \frac{2AT}{\pi}\frac{\cos \pi Tf}{1-4T^2f^2}e^{-j2\pi fT/2} \;\Rightarrow\; |G(f)|^2 = \left(\frac{2AT}{\pi}\right)^2\frac{\cos^2 \pi Tf}{(1-4T^2f^2)^2}$$
Hence:
$$S_U(f) = \left(\frac{2A}{\pi}\right)^2 T\,\frac{\cos^2 \pi Tf}{(1-4T^2f^2)^2}$$

(c) The 3 dB frequency for (1a) satisfies:
$$\frac{\sin^2 \pi f_{3\mathrm{dB}}T}{(\pi f_{3\mathrm{dB}}T)^2} = \frac12 \;\Rightarrow\; f_{3\mathrm{dB}} \cong \frac{0.44}{T}$$
(where this solution is obtained graphically), while the 3 dB frequency for the sinusoidal pulse of (1b) is $f_{3\mathrm{dB}} \cong \frac{0.59}{T}$.

The rectangular pulse spectrum has its first spectral null at $f = 1/T$, whereas the spectrum of the sinusoidal pulse has its first null at $f = 3/2T$. Clearly the spectrum for the rectangular pulse has a narrower main lobe. However, it has higher sidelobes.
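The two 3 dB frequencies quoted in (1c) can be found numerically instead of graphically. A minimal sketch (with $T$ normalized to 1; the bracketing intervals are chosen by inspecting the two normalized spectra):

```python
import math

def bisect(f, lo, hi, tol=1e-9):
    # f(lo) > 0 > f(hi); plain bisection on a single sign change
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

T = 1.0  # normalized symbol period

# Normalized PSD of the rectangular pulse minus 1/2: sin^2(pi f T)/(pi f T)^2 - 1/2
rect = lambda f: (math.sin(math.pi*f*T) / (math.pi*f*T))**2 - 0.5
# Normalized PSD of the sinusoidal pulse minus 1/2: cos^2(pi f T)/(1-4 T^2 f^2)^2 - 1/2
sine = lambda f: (math.cos(math.pi*f*T) / (1.0 - 4.0*T*T*f*f))**2 - 0.5

f3db_rect = bisect(rect, 0.3, 0.6)
f3db_sine = bisect(sine, 0.55, 0.7)
print(f3db_rect, f3db_sine)  # ≈ 0.44/T and ≈ 0.59/T
```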

2. [1, Problem 4.21]. The lowpass equivalent representation of a PAM signal is
$$u(t) = \sum_n I_n g(t - nT)$$
Suppose $g(t)$ is a rectangular pulse and
$$I_n = a_n - a_{n-2}$$
where $\{a_n\}$ is a sequence of uncorrelated binary $(1,-1)$ random variables that occur with equal probability (so $E\{a_na_m\} = 0$ for $n \ne m$).

(a) Determine the autocorrelation function of the sequence $\{I_n\}$.

(b) Determine the power density spectrum of $u(t)$.

(c) Repeat (2b) if the possible values of $a_n$ are $(0,1)$.

Solution:

(a)
$$C_I(m) = E\{I_{n+m}I_n\} = E\{(a_{n+m}-a_{n+m-2})(a_n-a_{n-2})\} = \begin{cases} 2, & m = 0, \\ -1, & m = \pm 2, \\ 0, & \text{otherwise,} \end{cases}$$
i.e. $C_I(m) = 2\delta(m)-\delta(m-2)-\delta(m+2)$.

(b) $S_U(f) = \frac{1}{T}|G(f)|^2\sum_{m=-\infty}^{\infty}C_I(m)e^{-j2\pi fmT}$, where
$$\sum_{m=-\infty}^{\infty}C_I(m)e^{-j2\pi fmT} = 2 - 2\cos 4\pi fT = 4\sin^2 2\pi fT,$$
and
$$|G(f)|^2 = (AT)^2\left(\frac{\sin \pi fT}{\pi fT}\right)^2.$$
Therefore:
$$S_U(f) = 4A^2T\left(\frac{\sin \pi fT}{\pi fT}\right)^2\sin^2 2\pi fT$$

(c) If $\{a_n\}$ takes the values $(0,1)$ with equal probability then $E\{a_n\} = 1/2$, $E\{a_n^2\} = 1/2$, and $E\{a_na_m\} = 1/4$ for $n \ne m$. This results in:
$$C_I(m) = \frac14\left[2\delta(m)-\delta(m-2)-\delta(m+2)\right] \;\Rightarrow\; \sum_{m=-\infty}^{\infty}C_I(m)e^{-j2\pi fmT} = \sin^2 2\pi fT$$
$$S_U(f) = A^2T\left(\frac{\sin \pi fT}{\pi fT}\right)^2\sin^2 2\pi fT$$
Thus, we obtain the same result as in (2b) but the magnitude of the various quantities is reduced by a factor of 4.
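The autocorrelation of $I_n = a_n - a_{n-2}$ for both alphabets can be estimated by simulation (a sketch; the sample count and seed are arbitrary choices):

```python
import random

def autocorr_I(alphabet, m, n_samples=200_000, seed=1):
    # Empirical C_I(m) = E{I_{n+m} I_n} for I_n = a_n - a_{n-2},
    # with a_n i.i.d. uniform over `alphabet`.
    rng = random.Random(seed)
    a = [rng.choice(alphabet) for _ in range(n_samples + m + 2)]
    I = [a[n] - a[n - 2] for n in range(2, len(a))]
    return sum(I[n + m] * I[n] for n in range(n_samples)) / n_samples

c_pm = [autocorr_I([1, -1], m) for m in (0, 1, 2)]   # (±1) alphabet
c_01 = [autocorr_I([0, 1], m) for m in (0, 1, 2)]    # (0,1) alphabet
print(c_pm, c_01)
```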

3. [2, Problem 1.16]. A zero-mean stationary process $x(t)$ is applied to a linear filter whose impulse response is defined by a truncated exponential:
$$h(t) = \begin{cases} ae^{-at}, & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$
Show that the power spectral density (PSD) of the filter output $y(t)$ is defined by
$$S_Y(f) = \frac{a^2}{a^2+4\pi^2f^2}\left(1 - 2\exp(-aT)\cos 2\pi fT + \exp(-2aT)\right)S_X(f)$$
where $S_X(f)$ is the PSD of the filter input.

Solution:

The frequency response of the filter is:
$$H(f) = \int_{-\infty}^{\infty}h(t)\exp(-j2\pi ft)\,dt = \int_0^T a\exp(-at)\exp(-j2\pi ft)\,dt = a\int_0^T \exp(-(a+j2\pi f)t)\,dt = \frac{a}{a+j2\pi f}\left[1-e^{-aT}(\cos 2\pi fT - j\sin 2\pi fT)\right].$$
The squared magnitude response is:
$$|H(f)|^2 = \frac{a^2}{a^2+4\pi^2f^2}\left(1 - 2e^{-aT}\cos 2\pi fT + e^{-2aT}\right)$$
and the required PSD follows from $S_Y(f) = |H(f)|^2S_X(f)$.
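The closed-form $|H(f)|^2$ can be checked against a direct numerical evaluation of the Fourier integral (a sketch; the values of $a$, $T$ and the test frequencies are arbitrary):

```python
import cmath, math

def H_numeric(f, a=2.0, T=1.5, n=20_000):
    # Midpoint-rule approximation of H(f) = ∫_0^T a e^{-at} e^{-j2πft} dt
    dt = T / n
    return sum(a * math.exp(-a*(k+0.5)*dt) * cmath.exp(-2j*math.pi*f*(k+0.5)*dt)
               for k in range(n)) * dt

def H2_formula(f, a=2.0, T=1.5):
    # Closed form derived above
    return (a*a / (a*a + 4*math.pi**2*f*f)
            * (1 - 2*math.exp(-a*T)*math.cos(2*math.pi*f*T) + math.exp(-2*a*T)))

vals = [(f, abs(H_numeric(f))**2, H2_formula(f)) for f in (0.0, 0.3, 1.0, 2.7)]
for f, num, th in vals:
    print(f, num, th)
```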

4. [1, Problem 4.32]. The information sequence $\{a_n\}$ is a sequence of i.i.d. random variables, each taking values $+1$ and $-1$ with equal probability. This sequence is to be transmitted at baseband by a biphase coding scheme, described by
$$s(t) = \sum_n a_n g(t-nT)$$
where $g(t)$ is defined by
$$g(t) = \begin{cases} 1, & 0 \le t \le T/2, \\ -1, & T/2 < t \le T. \end{cases}$$

(a) Find the PSD of $s(t)$.

(b) Assume that it is desirable to have a zero in the power spectrum at $f = 1/T$. To this end we use a precoding scheme by introducing $b_n = a_n + ka_{n-1}$, where $k$ is some constant, and then transmit the $\{b_n\}$ sequence using the same $g(t)$. Is it possible to choose $k$ to produce a frequency null at $f = 1/T$? If yes, what are the appropriate value and the resulting power spectrum?

(c) Now assume we want to have zeros at all multiples of $f_0 = 1/4T$. Is it possible to have these zeros with an appropriate choice of $k$ in the previous part? If not, what kind of precoding do you suggest to produce the desired nulls?

Solution:

(a) Since $\mu_a = 0$, $\sigma_a^2 = 1$, we have $S_S(f) = \frac{1}{T}|G(f)|^2$.
$$G(f) = \frac{T}{2}\frac{\sin(\pi fT/2)}{\pi fT/2}e^{-j2\pi fT/4} - \frac{T}{2}\frac{\sin(\pi fT/2)}{\pi fT/2}e^{-j2\pi f3T/4} = \frac{T}{2}\frac{\sin(\pi fT/2)}{\pi fT/2}e^{-j\pi fT}\left(2j\sin(\pi fT/2)\right) = jT\frac{\sin^2(\pi fT/2)}{\pi fT/2}e^{-j\pi fT}$$
$$\Rightarrow\; |G(f)|^2 = T^2\left(\frac{\sin^2(\pi fT/2)}{\pi fT/2}\right)^2, \qquad S_S(f) = T\left(\frac{\sin^2(\pi fT/2)}{\pi fT/2}\right)^2$$

(b) For a non-independent information sequence the power spectrum of $s(t)$ is given by $S_S(f) = \frac{1}{T}|G(f)|^2\sum_{m=-\infty}^{\infty}C_B(m)e^{-j2\pi fmT}$.
$$C_B(m) = E\{b_{n+m}b_n\} = E\{a_{n+m}a_n\} + kE\{a_{n+m-1}a_n\} + kE\{a_{n+m}a_{n-1}\} + k^2E\{a_{n+m-1}a_{n-1}\} = \begin{cases} 1+k^2, & m = 0, \\ k, & m = \pm 1, \\ 0, & \text{otherwise.} \end{cases}$$
Hence:
$$\sum_{m=-\infty}^{\infty}C_B(m)e^{-j2\pi fmT} = 1 + k^2 + 2k\cos 2\pi fT$$
We want:
$$S_S(1/T) = 0 \;\Rightarrow\; \left.\sum_{m=-\infty}^{\infty}C_B(m)e^{-j2\pi fmT}\right|_{f=1/T} = 0 \;\Rightarrow\; 1 + k^2 + 2k = 0 \;\Rightarrow\; k = -1$$
and the resulting power spectrum is:
$$S_S(f) = 4T\left(\frac{\sin^2 \pi fT/2}{\pi fT/2}\right)^2\sin^2 \pi fT$$

(c) The requirement of zeros at $f = l/4T$, $l = \pm1, \pm2, \ldots$ means $1 + k^2 + 2k\cos(\pi l/2) = 0$, which cannot be satisfied for all $l$. We can avoid that by using precoding of the form $b_n = a_n + ka_{n-4}$. Then
$$C_B(m) = \begin{cases} 1+k^2, & m = 0, \\ k, & m = \pm 4, \\ 0, & \text{otherwise,} \end{cases} \qquad \sum_{m=-\infty}^{\infty}C_B(m)e^{-j2\pi fmT} = 1 + k^2 + 2k\cos 2\pi f\cdot 4T$$
and a value of k = −1 will zero this spectrum in all multiples of 1/4T.
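The spectral weight introduced by each precoder can be evaluated directly: with $k=-1$ the delay-4 precoder nulls every multiple of $1/4T$, while the delay-1 precoder does not (a sketch with $T$ normalized to 1):

```python
import math

T, k = 1.0, -1.0
# Weight from the precoding b_n = a_n + k a_{n-4}:
W4 = lambda f: 1 + k*k + 2*k*math.cos(2*math.pi*f*4*T)
for l in range(1, 9):
    assert abs(W4(l / (4*T))) < 1e-12, l   # zero at every multiple of 1/4T

# Weight from b_n = a_n + k a_{n-1} evaluated at f = 1/4T:
W1 = lambda f: 1 + k*k + 2*k*math.cos(2*math.pi*f*T)
print(W1(1 / (4*T)))  # = 2 at f = 1/4T, not 0
```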

5. [1, Problem 4.29]. Show that 16-QAM on $\{\pm1,\pm3\}\times\{\pm1,\pm3\}$ can be represented as a superposition of two 4-QAM signals where each component is amplified separately before summing, i.e., let
$$s(t) = G[A_n\cos 2\pi ft + B_n\sin 2\pi ft] + [C_n\cos 2\pi ft + D_n\sin 2\pi ft]$$
where $\{A_n\}$, $\{B_n\}$, $\{C_n\}$ and $\{D_n\}$ are statistically independent binary sequences with elements from the set $\{+1,-1\}$ and $G$ is the amplifier gain. You need to show that $s(t)$ can also be written as
$$s(t) = I_n\cos 2\pi ft + Q_n\sin 2\pi ft$$
and determine $I_n$ and $Q_n$ in terms of $A_n$, $B_n$, $C_n$ and $D_n$.

Solution:

The 16-QAM signal is represented as $s(t) = I_n\cos 2\pi ft + Q_n\sin 2\pi ft$ where $I_n \in \{\pm1,\pm3\}$, $Q_n \in \{\pm1,\pm3\}$. A superposition of two 4-QAM signals is:
$$s(t) = G[A_n\cos 2\pi ft + B_n\sin 2\pi ft] + [C_n\cos 2\pi ft + D_n\sin 2\pi ft]$$
where $A_n, B_n, C_n, D_n \in \{\pm1\}$. Clearly $I_n = GA_n + C_n$, $Q_n = GB_n + D_n$. From these equations it is easy to see that $G = 2$ gives the required equivalence.
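That $G=2$ maps the four $\pm1$ pairs onto the four 16-QAM amplitude levels, and that every point of the 16-QAM grid is reached, can be confirmed by enumeration:

```python
# Levels produced by I_n = 2*A_n + C_n with A_n, C_n in {+1, -1}
levels = sorted({2*a + c for a in (1, -1) for c in (1, -1)})
print(levels)  # [-3, -1, 1, 3]

# All 16 (I_n, Q_n) pairs of the 16-QAM constellation are reached
grid = {(2*a + c, 2*b + d)
        for a in (1, -1) for b in (1, -1) for c in (1, -1) for d in (1, -1)}
full = {(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)}
print(grid == full)  # True
```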

6. [2, Problem 1.15]. A running integrator is defined by
$$y(t) = \int_{t-T}^{t}x(\tau)\,d\tau,$$
where $x(t)$ is the input, $y(t)$ is the output, and $T$ is the integration period. Both $x(t)$ and $y(t)$ are sample functions of stationary processes $X(t)$ and $Y(t)$, respectively. Show that the power spectral density (PSD) of the integrator output is related to that of the integrator input by
$$S_Y(f) = T^2\mathrm{sinc}^2(\pi fT)S_X(f).$$
Remark 1. $\mathrm{sinc}(x) = \frac{\sin(x)}{x}$.

Solution:

First, we will find the impulse response, $h(t)$, of the running integrator:
$$h(t) = \int_{t-T}^{t}\delta(\tau)\,d\tau = \begin{cases} 1, & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$
Correspondingly, the frequency response of the running integrator is
$$H(f) = \int_{-\infty}^{\infty}h(t)e^{-j2\pi ft}\,dt = \int_0^T e^{-j2\pi ft}\,dt = \frac{1}{j2\pi f}\left(1-e^{-j2\pi fT}\right) = T\,\mathrm{sinc}(\pi fT)e^{-j\pi fT}.$$
Hence, the PSD $S_Y(f)$ is defined in terms of the PSD $S_X(f)$ as follows:
$$S_Y(f) = |H(f)|^2S_X(f) = T^2\mathrm{sinc}^2(\pi fT)S_X(f).$$


2 Decision Criteria and Hypothesis Testing

Remark 2. Hypothesis testing is another common name for a decision problem: you have to decide between two or more hypotheses, say $H_0, H_1, H_2, \ldots$, where $H_i$ can be interpreted as "the unknown parameter has value $i$". Decoding a constellation with $K$ symbols can be interpreted as selecting the correct hypothesis from $H_0, H_1, \ldots, H_{K-1}$, where $H_i$ is the hypothesis that $S_i$ was transmitted.

1. Consider an equal-probability binary source, $p(0) = p(1) = 1/2$, and a continuous-output channel:
$$f_{R|M}(r|\text{"1"}) = ae^{-ar}, \quad r \ge 0$$
$$f_{R|M}(r|\text{"0"}) = be^{-br}, \quad r \ge 0, \quad b > a > 0$$

(a) Find a constant $K$ such that the optimal decision rule is $r\ \overset{1}{\underset{0}{\gtrless}}\ K$.

(b) Find the respective error probability.

Solution:

(a) Optimal decision rule:
$$p(0)f_{R|M}(r|\text{"0"})\ \overset{0}{\underset{1}{\gtrless}}\ p(1)f_{R|M}(r|\text{"1"})$$
Using the defined channel distributions:
$$be^{-br}\ \overset{0}{\underset{1}{\gtrless}}\ ae^{-ar} \;\Rightarrow\; 1\ \overset{0}{\underset{1}{\gtrless}}\ \frac{a}{b}e^{-(a-b)r} \;\Rightarrow\; 0\ \overset{0}{\underset{1}{\gtrless}}\ \ln\!\left(\frac{a}{b}\right) + (b-a)r \;\Rightarrow\; r\ \overset{1}{\underset{0}{\gtrless}}\ \frac{\ln(a/b)}{a-b} = K$$

(b)
$$p(e) = p(0)\Pr\{r > K|0\} + p(1)\Pr\{r < K|1\} = \frac12\left[\int_K^\infty be^{-bt}\,dt + \int_0^K ae^{-at}\,dt\right] = \frac12\left[e^{-bK} + 1 - e^{-aK}\right]$$
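Both the threshold and the closed-form error probability can be sanity-checked numerically (a sketch; the values $a=1$, $b=3$ and the seed are arbitrary):

```python
import math, random

a, b = 1.0, 3.0                       # any b > a > 0
K = math.log(a / b) / (a - b)
# At the threshold the two (equally weighted) likelihoods coincide:
assert abs(b*math.exp(-b*K) - a*math.exp(-a*K)) < 1e-12

p_err = 0.5 * (math.exp(-b*K) + 1 - math.exp(-a*K))

# Monte Carlo estimate of the same error probability
rng = random.Random(0)
n, errors = 200_000, 0
for _ in range(n):
    if rng.random() < 0.5:            # "0" sent: r ~ Exp(b), error if r > K
        errors += rng.expovariate(b) > K
    else:                             # "1" sent: r ~ Exp(a), error if r < K
        errors += rng.expovariate(a) < K
print(p_err, errors / n)
```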

2. Consider a binary source, $\Pr\{x = -2\} = 2/3$, $\Pr\{x = 1\} = 1/3$, and the following channel:
$$y = A\cdot x, \qquad A \sim N(1,1)$$
where $x$ and $A$ are independent.

(a) Find the optimal decision rule.

(b) Calculate the respective error probability.

Solution:

(a) First we find the conditional distribution of $y$ given $x$:
$$(Y|-2) \sim N(-2,4), \qquad (Y|1) \sim N(1,1)$$
Hence the decision rule will be:
$$\frac23\frac{1}{\sqrt{8\pi}}\exp\!\left(-\frac{(y+2)^2}{8}\right)\ \overset{-2}{\underset{1}{\gtrless}}\ \frac13\frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{(y-1)^2}{2}\right)$$
The constants on both sides are equal, so:
$$-\frac{(y+2)^2}{8}\ \overset{-2}{\underset{1}{\gtrless}}\ -\frac{(y-1)^2}{2} \;\Rightarrow\; 3y(y-4)\ \overset{-2}{\underset{1}{\gtrless}}\ 0 \;\Rightarrow\; \hat x(y) = \begin{cases} -2, & y < 0 \text{ or } 4 < y, \\ 1, & \text{otherwise.} \end{cases}$$

(b)
$$p(e) = \frac23\int_0^4 f(y|-2)\,dy + \frac13\left[\int_{-\infty}^0 f(y|1)\,dy + \int_4^\infty f(y|1)\,dy\right]$$
$$= \frac23\left[Q\!\left(\frac{0+2}{2}\right) - Q\!\left(\frac{4+2}{2}\right)\right] + \frac13\left[1 - Q\!\left(\frac{0-1}{1}\right) + Q\!\left(\frac{4-1}{1}\right)\right] = Q(1) - \frac13 Q(3) \cong 0.15821$$
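The closed-form answer $Q(1) - \frac13 Q(3)$ can be evaluated with the complementary error function and checked by simulating the channel (a sketch; the sample count and seed are arbitrary):

```python
import math, random

Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))
p_theory = Q(1) - Q(3) / 3
print(p_theory)  # ≈ 0.1582

# Monte Carlo over the channel y = A*x with A ~ N(1,1)
rng = random.Random(0)
n, errors = 300_000, 0
for _ in range(n):
    x = -2 if rng.random() < 2/3 else 1
    y = rng.gauss(1, 1) * x
    decision = -2 if (y < 0 or y > 4) else 1
    errors += decision != x
print(errors / n)
```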

3. Decision rules for binary channels.

(a) The Binary Symmetric Channel (BSC) has binary (0 or 1) inputs and outputs. It outputs each bit correctly with probability $1-p$ and incorrectly with probability $p$. Assume 0 and 1 are equally likely inputs. State the MAP and ML decision rules for the BSC when $p < \frac12$. How are the decision rules different when $p > \frac12$?

(b) The Binary Erasure Channel (BEC) has binary inputs as with the BSC. However, there are three possible outputs. Given an input of 0, the output is 0 with probability $1-p_1$ and 2 with probability $p_1$. Given an input of 1, the output is 1 with probability $1-p_2$ and 2 with probability $p_2$. Assume 0 and 1 are equally likely inputs. State the MAP and ML decision rules for the BEC when $p_1 < p_2 < \frac12$. How are the decision rules different when $p_2 < p_1 < \frac12$?

Solution:

(a) For equally likely inputs the MAP and ML decision rules are identical. In each case we wish to maximize $p_{y|x}(y|x_i)$ over the possible choices of $x_i$. The decision rules are:
$$p < \frac12 \;\Rightarrow\; \hat X = Y, \qquad p > \frac12 \;\Rightarrow\; \hat X = 1 - Y$$

(b) Again, since we have equiprobable signals, the MAP and ML decision rules are the same. The decision rules are:
$$p_1 < p_2 < \frac12 \;\Rightarrow\; \hat X = \begin{cases} Y, & Y = 0, 1, \\ 1, & Y = 2, \end{cases} \qquad p_2 < p_1 < \frac12 \;\Rightarrow\; \hat X = \begin{cases} Y, & Y = 0, 1, \\ 0, & Y = 2. \end{cases}$$

4. In a binary hypothesis testing problem, the observation $Z$ is Rayleigh distributed under both hypotheses, with a different parameter under each:
$$f(z|H_i) = \frac{z}{\sigma_i^2}\exp\!\left(-\frac{z^2}{2\sigma_i^2}\right), \quad z \ge 0, \quad i = 0, 1$$
You need to decide whether the observed variable $Z$ was generated with $\sigma_0^2$ or with $\sigma_1^2$, namely choose between $H_0$ and $H_1$.

(a) Obtain the decision rule for the minimum probability of error criterion. Assume that $H_0$ and $H_1$ are equiprobable.

(b) Extend your results to $N$ independent observations, and derive the expressions for the resulting probability of error.
Note: If $R \sim \mathrm{Rayleigh}(\sigma)$ then $\sum_{i=1}^N R_i^2$ has a gamma distribution with parameters $N$ and $2\sigma^2$: $Y = \sum_{i=1}^N R_i^2 \sim \Gamma(N, 2\sigma^2)$.

Solution:

(a)
$$\log f(z|H_i) = \log z - \log\sigma_i^2 - \frac{z^2}{2\sigma_i^2}$$
$$\Rightarrow\; \log f(z|H_1) - \log f(z|H_0) = \log\frac{\sigma_0^2}{\sigma_1^2} + z^2\left(\frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2}\right)\ \overset{H_1}{\underset{H_0}{\gtrless}}\ 0$$
$$\Rightarrow\; z^2\ \overset{H_1}{\underset{H_0}{\gtrless}}\ 2\log\!\left(\frac{\sigma_1^2}{\sigma_0^2}\right)\cdot\frac{\sigma_1^2\sigma_0^2}{\sigma_1^2-\sigma_0^2} = \gamma$$
Since $z \ge 0$, the following decision rule is obtained:
$$\hat H = \begin{cases} H_1, & z \ge \sqrt\gamma, \\ H_0, & z < \sqrt\gamma. \end{cases}$$

(b) Denote the Likelihood Ratio Test by $\mathrm{LRT} \triangleq \frac{f(z|H_1)}{f(z|H_0)}$, hence $\log\mathrm{LRT} = \log f(z|H_1) - \log f(z|H_0)$. For $N$ i.i.d. observations:
$$\log f(\mathbf z|H_i) = \sum_{n=0}^{N-1}\log f(z_n|H_i) = -N\log\sigma_i^2 + \sum_{n=0}^{N-1}\left(\log z_n - \frac{z_n^2}{2\sigma_i^2}\right)$$
The log-LRT will be:
$$-N\log\!\left(\frac{\sigma_1^2}{\sigma_0^2}\right) + \left(\frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2}\right)\sum_{n=0}^{N-1}z_n^2\ \overset{H_1}{\underset{H_0}{\gtrless}}\ 0 \;\Rightarrow\; \sum_{n=0}^{N-1}z_n^2\ \overset{H_1}{\underset{H_0}{\gtrless}}\ 2N\log\!\left(\frac{\sigma_1^2}{\sigma_0^2}\right)\cdot\frac{\sigma_1^2\sigma_0^2}{\sigma_1^2-\sigma_0^2} = \gamma$$
Define $Y = \sum_{n=0}^{N-1}z_n^2$; then $Y|H_i \sim \Gamma(N, 2\sigma_i^2)$, and
$$P_{FA} = \Pr\{\text{deciding } H_1 \text{ when } H_0 \text{ holds}\} = \Pr\{Y > \gamma|H_0\} = 1 - \frac{\gamma(N, \gamma/2\sigma_0^2)}{\Gamma(N)}$$
$$P_{M} = \Pr\{\text{deciding } H_0 \text{ when } H_1 \text{ holds}\} = 1 - \Pr\{Y > \gamma|H_1\} = \frac{\gamma(N, \gamma/2\sigma_1^2)}{\Gamma(N)}$$
where $\gamma(s,x) = \int_0^x t^{s-1}e^{-t}\,dt$ is the lower incomplete gamma function.
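For integer $N$ the regularized lower incomplete gamma function has the closed Erlang-CDF form $1 - e^{-u}\sum_{k<N}u^k/k!$, which lets the expressions above be checked against simulation (a sketch; $\sigma_0$, $\sigma_1$, $N$ and the seed are arbitrary choices):

```python
import math, random

s0, s1, N = 1.0, 2.0, 5
gamma_th = 2*N*math.log(s1**2/s0**2) * (s1**2*s0**2)/(s1**2 - s0**2)

# Single-observation check: at z = sqrt(γ_1) the two densities are equal
g1 = 2*math.log(s1**2/s0**2) * (s1**2*s0**2)/(s1**2 - s0**2)
pdf = lambda z, s: z/s**2 * math.exp(-z**2/(2*s**2))
assert abs(pdf(math.sqrt(g1), s0) - pdf(math.sqrt(g1), s1)) < 1e-9

def erlang_cdf(x, N, theta):
    # Pr{Y <= x} for Y ~ Γ(N, θ) with integer shape N (Erlang distribution):
    # regularized lower incomplete gamma P(N, x/θ) = 1 - e^{-u} Σ_{k<N} u^k/k!
    u = x / theta
    return 1 - math.exp(-u) * sum(u**k / math.factorial(k) for k in range(N))

P_FA = 1 - erlang_cdf(gamma_th, N, 2*s0**2)
P_M = erlang_cdf(gamma_th, N, 2*s1**2)

# Monte Carlo check of P_FA: R^2 for R ~ Rayleigh(σ) is Exp with mean 2σ^2
rng = random.Random(0)
trials = 100_000
hits = sum(sum(rng.expovariate(1/(2*s0**2)) for _ in range(N)) > gamma_th
           for _ in range(trials))
print(P_FA, P_M, hits / trials)
```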

5. Consider a binary source, $\Pr\{x = 0\} = 1/3$, $\Pr\{x = 1\} = 2/3$, which can be transmitted using one of the channels depicted in Figure 1. It is given that $p < \frac12$, $q < \frac12$. Find the optimal decision rule and the respective error probability for the following cases:

(a) The source is transmitted using channel 1.

(b) The source is transmitted using channel 2.

(c) The source is transmitted using channel 1 with probability $\alpha$, and using channel 2 with probability $1-\alpha$.

(d) What is the value of $\alpha$ which achieves the minimal error probability ($\alpha$ as a function of $p$ and $q$)? What is the respective error probability?

(a) Channel 1 (b) Channel 2

Figure 1: Two channels for the transmission of the source X.

Solution:

(a) For a channel output $y = 0$:
$$\Pr\{x=0\}\cdot\Pr\{y=0|x=0\} = \frac13(1-p), \qquad \Pr\{x=1\}\cdot\Pr\{y=0|x=1\} = 0.$$
Hence for $y = 0$ the decision is $\hat x = 0$. For a channel output $y = 1$:
$$\Pr\{x=0\}\cdot\Pr\{y=1|x=0\} = \frac13 p, \qquad \Pr\{x=1\}\cdot\Pr\{y=1|x=1\} = \frac23.$$
Hence for $y = 1$ the decision is $\hat x = 1$. The error probability is
$$p(e) = \Pr\{x=0\}\cdot\Pr\{y=1|x=0\} = \frac13 p.$$

(b) For a channel output $y = 0$:
$$\Pr\{x=0\}\cdot\Pr\{y=0|x=0\} = \frac13, \qquad \Pr\{x=1\}\cdot\Pr\{y=0|x=1\} = \frac23 q.$$
Hence for $y = 0$ the decision is $\hat x = 0$. Using the same arguments as in the previous item, for a channel output $y = 1$ the decision is $\hat x = 1$, and the error probability is $p(e) = \frac23 q$.

(c) The equivalent channel is depicted in Figure 2.

Figure 2: The equivalent channel.

For a channel output $y = 0$:
$$\Pr\{x=0\}\cdot\Pr\{y=0|x=0\} = \frac13(1-\alpha p), \qquad \Pr\{x=1\}\cdot\Pr\{y=0|x=1\} = \frac23 q(1-\alpha).$$
Since $(1-\alpha) \le (1-\alpha p)$ and $q < \frac12$, we get
$$\frac13(1-\alpha p) > \frac23 q(1-\alpha) \;\Rightarrow\; \hat x = 0.$$
For a channel output $y = 1$:
$$\Pr\{x=0\}\cdot\Pr\{y=1|x=0\} = \frac13\alpha p, \qquad \Pr\{x=1\}\cdot\Pr\{y=1|x=1\} = \frac23\left(1-q(1-\alpha)\right).$$
Since $\frac12 < 1-q(1-\alpha)$ and $\alpha p < \frac12$, the decision is $\hat x = 1$. The error probability is
$$p(e) = \frac13\alpha p + \frac23 q(1-\alpha) = \frac23 q + \frac{\alpha}{3}(p - 2q).$$

(d) As $p(e)$ is linear with respect to $\alpha$, its minimal value is achieved at one of the edge points. For $2q < p$ the minimal error probability, $p(e) = \frac23 q$, is achieved for $\alpha = 0$. For $p \le 2q$ the minimal error probability, $p(e) = \frac13 p$, is achieved for α = 1.
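The endpoint argument in (d) can be illustrated by scanning $\alpha$ over a grid for the two regimes of $p$ versus $2q$ (a sketch; the particular values of $p$ and $q$ are arbitrary choices satisfying $p, q < 1/2$):

```python
def best_alpha(p, q):
    # p(e) from part (c): (2/3)q + (α/3)(p − 2q), minimized over a grid of α
    pe = lambda a: 2*q/3 + a*(p - 2*q)/3
    a_best = min((i/100 for i in range(101)), key=pe)
    return a_best, pe(a_best)

a1, e1 = best_alpha(0.3, 0.1)   # 2q < p: expect α = 0, p(e) = 2q/3
a2, e2 = best_alpha(0.2, 0.3)   # p ≤ 2q: expect α = 1, p(e) = p/3
print(a1, e1, a2, e2)
```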


3 Generalized Decision Criteria

Remark 3. Vectors are denoted with boldface letters, e.g. x, y.

1. Bayes decision criteria. Consider an equiprobable binary symmetric source $m \in \{0,1\}$. For the observation $R$, the conditional probability density function is
$$f_{R|M}(r|M=0) = \begin{cases} \frac12, & |r| < 1, \\ 0, & \text{otherwise,} \end{cases} \qquad f_{R|M}(r|M=1) = \frac12 e^{-|r|}$$

(a) Obtain the decision rule for the minimum probability of error criterion and the corresponding minimal probability of error.

(b) For the cost matrix $C = \begin{bmatrix} 0 & 2\alpha \\ \alpha & 0 \end{bmatrix}$, obtain the optimal generalized decision rule and the error probability.

Solution:

(a)
$$|r| > 1: \quad f_{R|M}(r|M=0) = 0 \;\Rightarrow\; \hat m = 1.$$
$$|r| < 1: \quad \frac{\frac12 e^{-|r|}}{\frac12}\ \overset{1}{\underset{0}{\gtrless}}\ 1 \;\Rightarrow\; -|r|\ \overset{1}{\underset{0}{\gtrless}}\ 0 \;\Rightarrow\; \hat m = 0$$
The probability of error:
$$p(e) = p(0)\cdot 0 + p(1)\int_{-1}^1 \frac12 e^{-|r|}\,dr = \frac12\left[1 - e^{-1}\right]$$

(b) The decision rule:
$$\frac{f_{R|M}(r|M=1)}{f_{R|M}(r|M=0)}\ \overset{1}{\underset{0}{\gtrless}}\ \frac{p(0)}{p(1)}\cdot\frac{C_{10}-C_{00}}{C_{01}-C_{11}} = \frac{\alpha}{2\alpha} = \frac12$$
$$|r| > 1: \quad f_{R|M}(r|M=0) = 0 \;\Rightarrow\; \hat m = 1.$$
$$|r| < 1: \quad \frac{\frac12 e^{-|r|}}{\frac12}\ \overset{1}{\underset{0}{\gtrless}}\ \frac12 \;\Rightarrow\; -|r|\ \overset{1}{\underset{0}{\gtrless}}\ -\ln 2 \;\Rightarrow\; \hat m = \begin{cases} 1, & |r| < \ln 2 \text{ or } |r| > 1, \\ 0, & \ln 2 < |r| < 1. \end{cases}$$
Probability of error:
$$P_{FA} = \Pr\{\hat m = 1|m = 0\} = \int_{-\ln 2}^{\ln 2}\frac12\,dr = \ln 2$$
$$P_{M} = \Pr\{\hat m = 0|m = 1\} = \int_{\ln 2 < |r| < 1}\frac12 e^{-|r|}\,dr = \frac12 - e^{-1}$$
$$p(e) = p(0)P_{FA} + p(1)P_M = \frac12\left[\ln 2 + \frac12 - e^{-1}\right]$$
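The closed forms above can be verified by integrating the densities numerically over the decision regions (a sketch; the midpoint-rule resolution is an arbitrary choice):

```python
import math

f1 = lambda r: 0.5 * math.exp(-abs(r))       # f(r | M=1)

def integrate(g, lo, hi, n=200_000):
    # Midpoint-rule numerical integration of g over [lo, hi]
    h = (hi - lo) / n
    return sum(g(lo + (k + 0.5) * h) for k in range(n)) * h

# (a): decide 0 on |r| < 1, so the only error is m=1 observed with |r| < 1
pe_a = 0.5 * integrate(f1, -1, 1)

# (b): decide 0 only on ln2 < |r| < 1
P_FA = integrate(lambda r: 0.5, -math.log(2), math.log(2))
P_M = 2 * integrate(f1, math.log(2), 1.0)
pe_b = 0.5 * (P_FA + P_M)
print(pe_a, pe_b)
```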


2. Non-Gaussian additive noise.

Consider the source $m \in \{1,-1\}$, $\Pr\{m=1\} = 0.9$, $\Pr\{m=-1\} = 0.1$. The observation $y$ obeys
$$y = m + N, \qquad N \sim U[-2,2]$$

(a) Obtain the decision rule for the minimum probability of error criterion and the minimal probability of error.

(b) For the cost matrix $C = \begin{bmatrix} 0 & 1 \\ 100 & 0 \end{bmatrix}$, obtain the optimal Bayes decision rule and the error probability.

Solution:

(a)
$$f(y|1) = \begin{cases} \frac14, & -1 < y < 3, \\ 0, & \text{otherwise,} \end{cases} \qquad f(y|-1) = \begin{cases} \frac14, & -3 < y < 1, \\ 0, & \text{otherwise.} \end{cases}$$
In the overlap $-1 < y < 1$ both likelihoods equal $\frac14$, so the MAP rule decides by the larger prior, $\hat m = 1$:
$$\hat m = \begin{cases} -1, & -3 < y < -1, \\ 1, & -1 < y < 3. \end{cases}$$
The probability of error:
$$p(e) = p(1)\cdot 0 + p(-1)\int_{-1}^1\frac14\,dy = 0.05$$

(b) The decision rule:
$$\frac{f(y|1)}{f(y|-1)}\ \overset{1}{\underset{-1}{\gtrless}}\ \frac{p(-1)}{p(1)}\cdot\frac{100}{1} \;\Rightarrow\; p(1)f(y|1)\ \overset{1}{\underset{-1}{\gtrless}}\ 100\,p(-1)f(y|-1)$$
In the overlap $0.9\cdot\frac14 < 100\cdot 0.1\cdot\frac14$, hence:
$$\hat m = \begin{cases} -1, & -3 < y < 1, \\ 1, & 1 < y < 3. \end{cases}$$
The probability of error:
$$p(e) = p(-1)\cdot 0 + p(1)\int_{-1}^1\frac14\,dy = 0.45$$

3. A binary digital communication system transmits one of the following two symbols, $S_0 = 0$, $S_1 = A$, $M$ consecutive times, using a zero-mean AWGN channel with variance $\sigma^2$. The receiver decides which symbol was transmitted based on the corresponding $M$ received samples $\{r_i\}$, $i = 1, 2, \ldots, M$. The symbols' a-priori probabilities obey $\frac{p(S_0)}{p(S_1)} = \eta$, while the receiver uses the cost matrix $C = \begin{bmatrix} C_{00} & C_{01} \\ C_{10} & C_{11} \end{bmatrix}$.

(a) Find the conditional PDFs $f_{R|S}(\mathbf r|S_0)$, $f_{R|S}(\mathbf r|S_1)$.

(b) Use the Bayes criterion to show that the optimal decision criterion is
$$\frac{1}{M}\sum_{i=1}^M r_i\ \overset{S_1}{\underset{S_0}{\gtrless}}\ \gamma,$$
and find $\gamma$.

Solution:

(a)
$$f_{R|S}(\mathbf r|S_0) = \prod_{i=1}^M\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{r_i^2}{2\sigma^2}}, \qquad f_{R|S}(\mathbf r|S_1) = \prod_{i=1}^M\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(r_i-A)^2}{2\sigma^2}}.$$

(b) The Bayes optimal decision criterion is
$$\frac{f_{R|S}(\mathbf r|S_1)}{f_{R|S}(\mathbf r|S_0)}\ \overset{S_1}{\underset{S_0}{\gtrless}}\ \eta\frac{C_{10}-C_{00}}{C_{01}-C_{11}}$$
$$\Rightarrow\; \prod_{i=1}^M e^{\frac{-A^2+2r_iA}{2\sigma^2}}\ \overset{S_1}{\underset{S_0}{\gtrless}}\ \eta\frac{C_{10}-C_{00}}{C_{01}-C_{11}}$$
$$\Rightarrow\; \sum_{i=1}^M\frac{-A^2+2r_iA}{2\sigma^2}\ \overset{S_1}{\underset{S_0}{\gtrless}}\ \ln\!\left(\eta\frac{C_{10}-C_{00}}{C_{01}-C_{11}}\right)$$
$$\Rightarrow\; \sum_{i=1}^M\frac{r_iA}{\sigma^2}\ \overset{S_1}{\underset{S_0}{\gtrless}}\ \frac{MA^2}{2\sigma^2} + \ln\!\left(\eta\frac{C_{10}-C_{00}}{C_{01}-C_{11}}\right)$$
$$\Rightarrow\; \frac1M\sum_{i=1}^M r_i\ \overset{S_1}{\underset{S_0}{\gtrless}}\ \frac{A}{2} + \frac{\sigma^2}{MA}\cdot\ln\!\left(\eta\frac{C_{10}-C_{00}}{C_{01}-C_{11}}\right)\equiv \gamma.
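The reduction to a sample-mean threshold can be checked empirically: for random channel draws, comparing the full weighted-likelihood ratio must give the same decision as comparing the sample mean with $\gamma$ (a sketch; $A$, $\sigma$, $M$, $\eta$, unit costs and the seed are arbitrary choices):

```python
import math, random

A, sigma, M, eta = 1.0, 2.0, 8, 0.5
C00 = C11 = 0.0
C10 = C01 = 1.0
gamma = A/2 + sigma**2/(M*A) * math.log(eta * (C10 - C00)/(C01 - C11))

rng = random.Random(3)
for _ in range(2000):
    s = A if rng.random() < 1/(1 + eta) else 0.0     # p(S0)/p(S1) = η
    r = [s + rng.gauss(0, sigma) for _ in range(M)]
    # direct Bayes comparison of the log-likelihood ratio with the cost threshold
    ll1 = sum(-(x - A)**2/(2*sigma**2) for x in r)
    ll0 = sum(-x**2/(2*sigma**2) for x in r)
    direct = ll1 - ll0 > math.log(eta * (C10 - C00)/(C01 - C11))
    threshold = sum(r)/M > gamma                     # sample-mean rule
    assert direct == threshold
print("sample-mean rule matches the likelihood-ratio rule")
```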


4 Vector Communication Channels

1. General Gaussian vector channel. Consider the Gaussian vector channel with the sources $p(m_0) = q$, $p(m_1) = 1-q$, $\mathbf s_0 = [1,1]^T$, $\mathbf s_1 = [-1,-1]^T$. For sending $m_0$ the transmitter sends $\mathbf s_0$ and for sending $m_1$ the transmitter sends $\mathbf s_1$. The observation obeys
$$\mathbf r = \mathbf s_i + \mathbf n, \qquad \mathbf n = [n_1, n_2]^T \sim N(\mathbf 0, \Lambda_n), \qquad \Lambda_n = \begin{bmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{bmatrix}$$
The noise vector $\mathbf n$ and the messages $m_i$ are independent.

(a) Obtain the optimal decision rule using the MAP criterion, and examine it for the following cases:

i. $q = \frac12$, $\sigma_1 = \sigma_2$.

ii. $q = \frac12$, $\sigma_1^2 = 2\sigma_2^2$.

iii. $q = \frac13$, $\sigma_1^2 = 2\sigma_2^2$.

(b) Derive the error probability for the obtained decision rule.

Solution:

(a) The conditional probability distribution is $\mathbf R|\mathbf s_i \sim N(\mathbf s_i, \Lambda_n)$:
$$f(\mathbf r|\mathbf s_i) = \frac{1}{\sqrt{(2\pi)^2\det\Lambda_n}}\exp\left\{-\frac12(\mathbf r-\mathbf s_i)^T\Lambda_n^{-1}(\mathbf r-\mathbf s_i)\right\}$$
The MAP optimal decision rule:
$$p(m_0)f(\mathbf r|\mathbf s_0)\ \overset{m_0}{\underset{m_1}{\gtrless}}\ p(m_1)f(\mathbf r|\mathbf s_1)$$
$$q\exp\left\{-\frac12(\mathbf r-\mathbf s_0)^T\Lambda_n^{-1}(\mathbf r-\mathbf s_0)\right\}\ \overset{m_0}{\underset{m_1}{\gtrless}}\ (1-q)\exp\left\{-\frac12(\mathbf r-\mathbf s_1)^T\Lambda_n^{-1}(\mathbf r-\mathbf s_1)\right\}$$
$$(\mathbf r-\mathbf s_1)^T\Lambda_n^{-1}(\mathbf r-\mathbf s_1) - (\mathbf r-\mathbf s_0)^T\Lambda_n^{-1}(\mathbf r-\mathbf s_0)\ \overset{m_0}{\underset{m_1}{\gtrless}}\ 2\ln\frac{1-q}{q}$$
Assign $\mathbf r^T = [x, y]$:
$$\frac{(x+1)^2}{\sigma_1^2} + \frac{(y+1)^2}{\sigma_2^2} - \frac{(x-1)^2}{\sigma_1^2} - \frac{(y-1)^2}{\sigma_2^2}\ \overset{m_0}{\underset{m_1}{\gtrless}}\ 2\ln\frac{1-q}{q} \;\Rightarrow\; \frac{x}{\sigma_1^2} + \frac{y}{\sigma_2^2}\ \overset{m_0}{\underset{m_1}{\gtrless}}\ \frac12\ln\frac{1-q}{q}$$

i. For the case $q = \frac12$, $\sigma_1 = \sigma_2$ the decision rule becomes
$$x + y\ \overset{m_0}{\underset{m_1}{\gtrless}}\ 0$$

ii. For the case $q = \frac12$, $\sigma_1^2 = 2\sigma_2^2$ the decision rule becomes
$$x + 2y\ \overset{m_0}{\underset{m_1}{\gtrless}}\ 0$$

iii. For the case $q = \frac13$, $\sigma_1^2 = 2\sigma_2^2$ the decision rule becomes
$$x + 2y\ \overset{m_0}{\underset{m_1}{\gtrless}}\ \sigma_2^2\ln 2$$

(b) Denote $K \triangleq \frac12\ln\frac{1-q}{q}$ and define $z = \frac{x}{\sigma_1^2} + \frac{y}{\sigma_2^2}$. The conditional distribution of $Z$ is
$$Z|\mathbf s_i \sim N\left((-1)^i\frac{\sigma_1^2+\sigma_2^2}{\sigma_1^2\sigma_2^2},\ \frac{\sigma_1^2+\sigma_2^2}{\sigma_1^2\sigma_2^2}\right), \quad i = 0, 1$$
The decision rule in terms of $z$ and $K$ is $z\ \overset{m_0}{\underset{m_1}{\gtrless}}\ K$. The error probability:
$$p(e) = p(m_0)\Pr\{z < K|m_0\} + p(m_1)\Pr\{z > K|m_1\}$$
Assigning the conditional distribution, and writing $\mu = \frac{\sigma_1^2+\sigma_2^2}{\sigma_1^2\sigma_2^2}$:
$$\Pr\{z < K|m_0\} = 1 - Q\left(\frac{K-\mu}{\sqrt\mu}\right), \qquad \Pr\{z > K|m_1\} = Q\left(\frac{K+\mu}{\sqrt\mu}\right)$$
For the case $q = \frac12$, $\sigma_1 = \sigma_2 = \sigma$ the error probability equals $Q\left(\sqrt{\frac{2}{\sigma^2}}\right)$.
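The special case (i) can be verified by simulation: with equal priors and equal noise variances, the rule $x + y \gtrless 0$ should achieve error probability $Q(\sqrt{2/\sigma^2})$ (a sketch; $\sigma$, the sample count and the seed are arbitrary choices):

```python
import math, random

Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))
sigma = 1.5
p_theory = Q(math.sqrt(2) / sigma)

rng = random.Random(0)
n, errors = 200_000, 0
for _ in range(n):
    s = 1 if rng.random() < 0.5 else -1          # s0 = [1,1], s1 = [-1,-1]
    x = s + rng.gauss(0, sigma)
    y = s + rng.gauss(0, sigma)
    decision = 1 if x + y > 0 else -1            # rule (i): x + y ≷ 0
    errors += decision != s
print(p_theory, errors / n)
```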

2. Non-Gaussian additive vector channel. Consider a binary hypothesis testing problem in which the sources $\mathbf s_0 = [1, 2, 3]$, $\mathbf s_1 = [1, -1, -3]$ are equiprobable. The observations obey
$$\mathbf r = \mathbf s_i + \mathbf n, \qquad \mathbf n = [n_0, n_1, n_2]$$
where the elements of $\mathbf n$ are i.i.d. with the following probability density function:
$$f_{N_k}(n_k) = \frac12 e^{-|n_k|}$$
Obtain the optimal decision rule using the MAP criterion.

Solution:

The optimal decision rule using the MAP criterion:
$$p(\mathbf s_0)f(\mathbf r|\mathbf s_0)\ \overset{0}{\underset{1}{\gtrless}}\ p(\mathbf s_1)f(\mathbf r|\mathbf s_1) \;\Rightarrow\; f(\mathbf r|\mathbf s_0)\ \overset{0}{\underset{1}{\gtrless}}\ f(\mathbf r|\mathbf s_1)$$
The conditional probability distribution function:
$$f(\mathbf r|\mathbf s_i) = f_{\mathbf N}(\mathbf r - \mathbf s_i) = \prod_{k=0}^2 f_N(n_k = r_k - s_{i,k}) = \frac18 e^{-\left[|r_0-s_{i,0}| + |r_1-s_{i,1}| + |r_2-s_{i,2}|\right]}$$
An assignment of the $\mathbf s_i$ elements yields
$$|r_0-1| + |r_1-2| + |r_2-3|\ \overset{1}{\underset{0}{\gtrless}}\ |r_0-1| + |r_1+1| + |r_2+3| \;\Rightarrow\; |r_1-2| + |r_2-3|\ \overset{1}{\underset{0}{\gtrless}}\ |r_1+1| + |r_2+3|$$
Note that the above decision rule compares absolute (ℓ1) distances between the observation and the two hypotheses, unlike the Gaussian vector channel in which the Euclidean distance is compared.

3. Gaussian two-channel. Consider the following two-channel problem, in which the observations under the two hypotheses are
$$H_0:\ \begin{bmatrix} Z_1 \\ Z_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & \frac12 \end{bmatrix}\begin{bmatrix} V_1 \\ V_2 \end{bmatrix} + \begin{bmatrix} -1 \\ -\frac12 \end{bmatrix} \qquad H_1:\ \begin{bmatrix} Z_1 \\ Z_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & \frac12 \end{bmatrix}\begin{bmatrix} V_1 \\ V_2 \end{bmatrix} + \begin{bmatrix} 1 \\ \frac12 \end{bmatrix}$$
where $V_1$ and $V_2$ are independent, zero-mean Gaussian variables with variance $\sigma^2$.

(a) Find the minimum probability of error receiver if both hypotheses are equally likely. Simplify the receiver structure.

(b) Find the minimum probability of error.

Solution:

Let $\mathbf Z = \begin{bmatrix} Z_1 \\ Z_2 \end{bmatrix}$. The conditional distribution of $\mathbf Z$ is
$$\mathbf Z|H_0 \sim N(\mu_0, \Lambda), \quad \mathbf Z|H_1 \sim N(\mu_1, \Lambda), \qquad \mu_0 = \begin{bmatrix} -1 \\ -\frac12 \end{bmatrix}, \quad \mu_1 = \begin{bmatrix} 1 \\ \frac12 \end{bmatrix}, \quad \Lambda = \sigma^2\begin{bmatrix} 1 & 0 \\ 0 & \frac14 \end{bmatrix}$$

(a) The decision rule:
$$\frac{f(\mathbf z|H_1)}{f(\mathbf z|H_0)}\ \overset{H_1}{\underset{H_0}{\gtrless}}\ \frac{p(H_0)}{p(H_1)} = 1 \;\Rightarrow\; \log f(\mathbf z|H_1) - \log f(\mathbf z|H_0)\ \overset{H_1}{\underset{H_0}{\gtrless}}\ 0$$
Since $|\mu_0| = |\mu_1|$, the log-likelihood ratio reduces to $(\mu_1-\mu_0)^T\Lambda^{-1}\mathbf z$:
$$\frac{2}{\sigma^2}(z_1 + 2z_2)\ \overset{H_1}{\underset{H_0}{\gtrless}}\ 0 \;\Rightarrow\; z_1 + 2z_2\ \overset{H_1}{\underset{H_0}{\gtrless}}\ 0$$

(b) Define $X = Z_1 + 2Z_2$. Since $V_1$, $V_2$ are independent, $Z_1$, $Z_2$ are independent as well. A linear combination of $Z_1$, $Z_2$ yields a Gaussian random variable with the following parameters:
$$E\{X|H_0\} = -2, \quad E\{X|H_1\} = 2, \quad \mathrm{Var}\{X|H_0\} = \mathrm{Var}\{X|H_1\} = 2\sigma^2$$
And the probabilities of the error events:
$$P_{FA} = \Pr\{\hat H = H_1|H = H_0\} = \int_0^\infty f(x|H_0)\,dx = Q\!\left(\frac{2}{\sqrt{2\sigma^2}}\right),$$
$$P_{M} = \Pr\{\hat H = H_0|H = H_1\} = 1 - \int_0^\infty f(x|H_1)\,dx = Q\!\left(\frac{2}{\sqrt{2\sigma^2}}\right)$$
Hence the minimum probability of error is $p(e) = Q\!\left(\frac{\sqrt2}{\sigma}\right)$.

4. Additive vector channel. Consider the additive vector channel with the equiprobable sources $\mathbf s_0 = [1,1]^T$, $\mathbf s_1 = [-1,-1]^T$. The observations vector $\mathbf r$ obeys
$$\mathbf r = \mathbf s_i + \mathbf n, \qquad \mathbf n = [n_0, n_1]^T,$$
where $N_0 \sim N(0, \sigma^2)$ and $f_{N_1}(n_1) = \frac\lambda2 e^{-\lambda|n_1|}$. The noise vector elements, $n_0$, $n_1$, and the sources are all mutually independent.

(a) Find the conditional PDFs $f_{R|S}(\mathbf r|\mathbf s_0)$, $f_{R|S}(\mathbf r|\mathbf s_1)$.

(b) Find the log-likelihood ratio.

(c) Find and draw the optimal decision regions (in the $(r_0, r_1)$ plane) for $\lambda = \frac{1}{2\sigma^2}$.

Solution:

(a) The noise vector PDF is
$$f_{\mathbf N}(\mathbf n) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{n_0^2}{2\sigma^2}}\cdot\frac\lambda2 e^{-\lambda|n_1|},$$
and the conditional PDFs are
$$f_{R|S}(\mathbf r|\mathbf s_0) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(r_0-1)^2}{2\sigma^2}}\cdot\frac\lambda2 e^{-\lambda|r_1-1|}, \qquad f_{R|S}(\mathbf r|\mathbf s_1) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(r_0+1)^2}{2\sigma^2}}\cdot\frac\lambda2 e^{-\lambda|r_1+1|}.$$

(b) The log-likelihood ratio:
$$\ln\frac{f_{R|S}(\mathbf r|\mathbf s_1)}{f_{R|S}(\mathbf r|\mathbf s_0)} = \frac{1}{2\sigma^2}\left((r_0-1)^2 - (r_0+1)^2\right) - \lambda\left(|r_1+1| - |r_1-1|\right) = -\frac{2r_0}{\sigma^2} - \lambda\left(|r_1+1| - |r_1-1|\right).$$

(c) As the a-priori probabilities are equal, and using $\lambda = \frac{1}{2\sigma^2}$, the decision rule is
$$-\frac{2r_0}{\sigma^2} - \frac{1}{2\sigma^2}\left(|r_1+1| - |r_1-1|\right)\ \overset{s_1}{\underset{s_0}{\gtrless}}\ 0 \;\Rightarrow\; -|r_1+1| + |r_1-1| - 4r_0\ \overset{s_1}{\underset{s_0}{\gtrless}}\ 0.$$
For $r_1 \le -1$:
$$r_1 + 1 - r_1 + 1 - 4r_0\ \overset{s_1}{\underset{s_0}{\gtrless}}\ 0 \;\Rightarrow\; 2r_0 - 1\ \overset{s_0}{\underset{s_1}{\gtrless}}\ 0.$$
For $-1 \le r_1 \le 1$:
$$-(r_1+1) - r_1 + 1 - 4r_0\ \overset{s_1}{\underset{s_0}{\gtrless}}\ 0 \;\Rightarrow\; 2r_0 + r_1\ \overset{s_0}{\underset{s_1}{\gtrless}}\ 0.$$
For $1 \le r_1$:
$$-(r_1+1) + r_1 - 1 - 4r_0\ \overset{s_1}{\underset{s_0}{\gtrless}}\ 0 \;\Rightarrow\; 2r_0 + 1\ \overset{s_0}{\underset{s_1}{\gtrless}}\ 0.$$
The optimal decision regions are depicted in Figure 3.

Figure 3: Decision regions for Q4. $I_k$ is the region for deciding on $s_k$, $k = 0, 1$.


5 Signal Space Representation

1. [1, Problem 4.9]. Consider a set of $M$ orthogonal signal waveforms $s_m(t)$, $1 \le m \le M$, $0 \le t \le T$ (i.e. $\langle s_j(t), s_k(t)\rangle = 0$ for all $j \ne k$), all of which have the same energy $\varepsilon = \int_{-\infty}^{\infty}|s_m(t)|^2\,dt$. Define a new set of waveforms as
$$s_m'(t) = s_m(t) - \frac1M\sum_{k=1}^M s_k(t), \quad 1 \le m \le M, \quad 0 \le t \le T$$
Show that the $M$ signal waveforms $\{s_m'(t)\}$ have equal energy, given by
$$\varepsilon' = \frac{(M-1)\varepsilon}{M}$$
and are equally correlated, with correlation coefficient
$$\rho_{mn} = \frac{1}{\varepsilon'}\int_0^T s_m'(t)s_n'(t)\,dt = -\frac{1}{M-1}$$

Solution:

The energy of the signal waveform $s_m'(t)$ is:
$$\varepsilon' = \int_{-\infty}^{\infty}|s_m'(t)|^2\,dt = \int_{-\infty}^{\infty}\left|s_m(t) - \frac1M\sum_{k=1}^M s_k(t)\right|^2 dt$$
$$= \int_0^T s_m^2(t)\,dt + \frac{1}{M^2}\sum_{k=1}^M\sum_{l=1}^M\int_0^T s_k(t)s_l(t)\,dt - \frac2M\sum_{k=1}^M\int_0^T s_m(t)s_k(t)\,dt$$
$$= \varepsilon + \frac{1}{M^2}\sum_{k=1}^M\sum_{l=1}^M\varepsilon\delta_{kl} - \frac2M\varepsilon = \varepsilon + \frac1M\varepsilon - \frac2M\varepsilon = \frac{M-1}{M}\varepsilon$$
The correlation coefficient is given by ($m \ne n$):
$$\rho_{mn} = \frac{1}{\varepsilon'}\int_0^T s_m'(t)s_n'(t)\,dt = \frac{1}{\varepsilon'}\int_0^T\left(s_m(t) - \frac1M\sum_{k=1}^M s_k(t)\right)\left(s_n(t) - \frac1M\sum_{l=1}^M s_l(t)\right)dt$$
$$= \frac{1}{\varepsilon'}\left(\int_0^T s_m(t)s_n(t)\,dt + \frac{1}{M^2}\sum_{k=1}^M\sum_{l=1}^M\int_0^T s_k(t)s_l(t)\,dt - \frac2M\sum_{k=1}^M\int_0^T s_m(t)s_k(t)\,dt\right)$$
$$= \frac{0 + \frac1M\varepsilon - \frac2M\varepsilon}{\frac{M-1}{M}\varepsilon} = -\frac{1}{M-1}$$


2. [1, Problem 4.10]. Consider the following three waveforms:
$$f_1(t) = \begin{cases} \frac12, & 0 \le t < 2, \\ -\frac12, & 2 \le t < 4, \\ 0, & \text{otherwise,} \end{cases} \qquad f_2(t) = \begin{cases} \frac12, & 0 \le t < 4, \\ 0, & \text{otherwise,} \end{cases} \qquad f_3(t) = \begin{cases} \frac12, & 0 \le t < 1,\ 2 \le t < 3, \\ -\frac12, & 1 \le t < 2,\ 3 \le t < 4, \\ 0, & \text{otherwise.} \end{cases}$$

(a) Show that these waveforms are orthonormal.

(b) Check whether you can express $x(t)$ as a weighted linear combination of $f_n(t)$, $n = 1, 2, 3$, if
$$x(t) = \begin{cases} -1, & 0 < t < 1, \\ 1, & 1 \le t < 3, \\ -1, & 3 \le t < 4, \\ 0, & \text{otherwise,} \end{cases}$$
and if you can, determine the weighting coefficients; otherwise explain.

Solution:

(a) To show that the waveforms $f_n(t)$, $n = 1, 2, 3$ are orthogonal, we have to prove that:
$$\int_{-\infty}^{\infty}f_n(t)f_m(t)\,dt = 0, \quad m \ne n$$
For $n = 1$, $m = 2$:
$$\int_{-\infty}^{\infty}f_1(t)f_2(t)\,dt = \int_0^2 f_1(t)f_2(t)\,dt + \int_2^4 f_1(t)f_2(t)\,dt = \frac14\int_0^2 dt - \frac14\int_2^4 dt = 0$$
For $n = 1$, $m = 3$:
$$\int_{-\infty}^{\infty}f_1(t)f_3(t)\,dt = \frac14\int_0^1 dt - \frac14\int_1^2 dt - \frac14\int_2^3 dt + \frac14\int_3^4 dt = 0$$
For $n = 2$, $m = 3$:
$$\int_{-\infty}^{\infty}f_2(t)f_3(t)\,dt = \frac14\int_0^1 dt - \frac14\int_1^2 dt + \frac14\int_2^3 dt - \frac14\int_3^4 dt = 0$$
Thus, the signals $f_n(t)$ are orthogonal. It is also straightforward to prove that the signals have unit energy:
$$\int_{-\infty}^{\infty}|f_n(t)|^2\,dt = 1, \quad n = 1, 2, 3$$
Hence, they are orthonormal.

(b) We first determine the weighting coefficients
$$x_n = \int_{-\infty}^{\infty}x(t)f_n(t)\,dt, \quad n = 1, 2, 3$$
$$x_1 = \int_0^4 x(t)f_1(t)\,dt = -\frac12\int_0^1 dt + \frac12\int_1^2 dt - \frac12\int_2^3 dt + \frac12\int_3^4 dt = 0$$
$$x_2 = \int_0^4 x(t)f_2(t)\,dt = \frac12\int_0^4 x(t)\,dt = 0$$
$$x_3 = \int_0^4 x(t)f_3(t)\,dt = -\frac12\int_0^1 dt - \frac12\int_1^2 dt + \frac12\int_2^3 dt + \frac12\int_3^4 dt = 0$$
As is observed, $x(t)$ is orthogonal to the signal waveforms $f_n(t)$, $n = 1, 2, 3$, yet $x(t) \ne 0$; thus it cannot be represented as a linear combination of these functions.

3. [1, Problem 4.11]. Consider the following four waveforms:
$$s_1(t) = \begin{cases} 2, & 0 \le t < 1, \\ -1, & 1 \le t < 4, \\ 0, & \text{otherwise,} \end{cases} \qquad s_2(t) = \begin{cases} -2, & 0 \le t < 1, \\ 1, & 1 \le t < 3, \\ 0, & \text{otherwise,} \end{cases}$$
$$s_3(t) = \begin{cases} 1, & 0 \le t < 1,\ 2 \le t < 3, \\ -1, & 1 \le t < 2,\ 3 \le t < 4, \\ 0, & \text{otherwise,} \end{cases} \qquad s_4(t) = \begin{cases} 1, & 0 \le t < 1, \\ -2, & 1 \le t < 3, \\ 2, & 3 \le t < 4, \\ 0, & \text{otherwise.} \end{cases}$$

(a) Determine the dimensionality of the waveforms and a set of basis functions.

(b) Use the basis functions to represent the four waveforms by vectors $\mathbf s_1$, $\mathbf s_2$, $\mathbf s_3$ and $\mathbf s_4$.

(c) Determine the minimum distance between any pair of vectors.

Solution:

(a) As an orthonormal set of basis functions we consider the set
$$f_1(t) = \begin{cases} 1, & 0 \le t < 1, \\ 0, & \text{otherwise,} \end{cases} \quad f_2(t) = \begin{cases} 1, & 1 \le t < 2, \\ 0, & \text{otherwise,} \end{cases} \quad f_3(t) = \begin{cases} 1, & 2 \le t < 3, \\ 0, & \text{otherwise,} \end{cases} \quad f_4(t) = \begin{cases} 1, & 3 \le t < 4, \\ 0, & \text{otherwise.} \end{cases}$$
In matrix notation, the four waveforms can be represented as
$$\begin{bmatrix} s_1(t) \\ s_2(t) \\ s_3(t) \\ s_4(t) \end{bmatrix} = \begin{bmatrix} 2 & -1 & -1 & -1 \\ -2 & 1 & 1 & 0 \\ 1 & -1 & 1 & -1 \\ 1 & -2 & -2 & 2 \end{bmatrix}\begin{bmatrix} f_1(t) \\ f_2(t) \\ f_3(t) \\ f_4(t) \end{bmatrix}$$
Note that the rank of the transformation matrix is 4 and therefore the dimensionality of the waveforms is 4.

(b) The representation vectors are
$$\mathbf s_1 = \begin{bmatrix} 2 & -1 & -1 & -1 \end{bmatrix}, \quad \mathbf s_2 = \begin{bmatrix} -2 & 1 & 1 & 0 \end{bmatrix}, \quad \mathbf s_3 = \begin{bmatrix} 1 & -1 & 1 & -1 \end{bmatrix}, \quad \mathbf s_4 = \begin{bmatrix} 1 & -2 & -2 & 2 \end{bmatrix}$$

(c) The distance between the first and the second vector is:
$$d_{1,2} = \sqrt{|\mathbf s_1 - \mathbf s_2|^2} = \sqrt{\left|\begin{bmatrix} 4 & -2 & -2 & -1 \end{bmatrix}\right|^2} = \sqrt{25}$$
Similarly we find that:
$$d_{1,3} = \sqrt{\left|\begin{bmatrix} 1 & 0 & -2 & 0 \end{bmatrix}\right|^2} = \sqrt5, \qquad d_{1,4} = \sqrt{\left|\begin{bmatrix} 1 & 1 & 1 & -3 \end{bmatrix}\right|^2} = \sqrt{12}$$
$$d_{2,3} = \sqrt{\left|\begin{bmatrix} -3 & 2 & 0 & 1 \end{bmatrix}\right|^2} = \sqrt{14}, \qquad d_{2,4} = \sqrt{\left|\begin{bmatrix} -3 & 3 & 3 & -2 \end{bmatrix}\right|^2} = \sqrt{31}, \qquad d_{3,4} = \sqrt{\left|\begin{bmatrix} 0 & 1 & 3 & -3 \end{bmatrix}\right|^2} = \sqrt{19}$$
Thus, the minimum distance between any pair of vectors is $d_{\min} = \sqrt5$.
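The pairwise distances in (c) are quick to confirm by computing them over the representation vectors:

```python
import math
from itertools import combinations

s = {1: [2, -1, -1, -1], 2: [-2, 1, 1, 0],
     3: [1, -1, 1, -1], 4: [1, -2, -2, 2]}
dists = {(i, j): math.sqrt(sum((a - b)**2 for a, b in zip(s[i], s[j])))
         for i, j in combinations(s, 2)}
for pair, d in sorted(dists.items()):
    print(pair, round(d**2))   # squared distances: 25, 5, 12, 14, 31, 19
print(min(dists.values()))     # = sqrt(5)
```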

4. [2, Problem 5.4].

(a) Using the Gram-Schmidt orthogonalization procedure, find a set of orthonormal basis functions to represent the following signals:
$$s_1(t) = \begin{cases} 2, & 0 \le t < 1, \\ 0, & \text{otherwise,} \end{cases} \qquad s_2(t) = \begin{cases} -4, & 0 \le t < 2, \\ 0, & \text{otherwise,} \end{cases} \qquad s_3(t) = \begin{cases} 3, & 0 \le t < 3, \\ 0, & \text{otherwise.} \end{cases}$$

(b) Express each of the signals $s_i(t)$, $i = 1, 2, 3$ in terms of the basis functions found in (4a).

Solution:

(a) The energy of $s_1(t)$ and the first basis function are
$$E_1 = \int_0^1|s_1(t)|^2\,dt = \int_0^1 2^2\,dt = 4 \;\Rightarrow\; \phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} = \begin{cases} 1, & 0 \le t < 1, \\ 0, & \text{otherwise.} \end{cases}$$
Define
$$s_{21} = \int_0^3 s_2(t)\phi_1(t)\,dt = \int_0^1 (-4)\cdot 1\,dt = -4$$
$$g_2(t) = s_2(t) - s_{21}\phi_1(t) = \begin{cases} -4, & 1 \le t < 2, \\ 0, & \text{otherwise.} \end{cases}$$
Hence, the second basis function is
$$\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^3 g_2^2(t)\,dt}} = \begin{cases} -1, & 1 \le t < 2, \\ 0, & \text{otherwise.} \end{cases}$$
Define
$$s_{31} = \int_0^3 s_3(t)\phi_1(t)\,dt = \int_0^1 3\cdot 1\,dt = 3, \qquad s_{32} = \int_0^3 s_3(t)\phi_2(t)\,dt = \int_1^2 3\cdot(-1)\,dt = -3$$
$$g_3(t) = s_3(t) - s_{31}\phi_1(t) - s_{32}\phi_2(t) = \begin{cases} 3, & 2 \le t < 3, \\ 0, & \text{otherwise.} \end{cases}$$
Hence, the third basis function is
$$\phi_3(t) = \frac{g_3(t)}{\sqrt{\int_0^3 g_3^2(t)\,dt}} = \begin{cases} 1, & 2 \le t < 3, \\ 0, & \text{otherwise.} \end{cases}$$

(b)
$$s_1(t) = 2\phi_1(t)$$
$$s_2(t) = -4\phi_1(t) + 4\phi_2(t)$$
$$s_3(t) = 3\phi_1(t) - 3\phi_2(t) + 3\phi_3(t)$$
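The same Gram-Schmidt computation can be reproduced numerically on sampled versions of the three signals (a sketch; the sampling step is an arbitrary choice):

```python
import math

# Discretize [0, 3) and run Gram-Schmidt on the sampled signals
dt = 1e-3
t = [k * dt for k in range(3000)]
s1 = [2.0 if x < 1 else 0.0 for x in t]
s2 = [-4.0 if x < 2 else 0.0 for x in t]
s3 = [3.0 for x in t]

dot = lambda u, v: sum(a * b for a, b in zip(u, v)) * dt

basis = []
for sig in (s1, s2, s3):
    g = list(sig)
    for phi in basis:                    # subtract projections on earlier bases
        c = dot(sig, phi)
        g = [gi - c * pi for gi, pi in zip(g, phi)]
    norm = math.sqrt(dot(g, g))
    basis.append([gi / norm for gi in g])

# Coefficients of each signal in the resulting orthonormal basis
coeffs = [[round(dot(sig, phi), 6) for phi in basis] for sig in (s1, s2, s3)]
print(coeffs)  # ≈ [[2, 0, 0], [-4, 4, 0], [3, -3, 3]]
```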

5. Optimum receiver.Suppose one of M equiprobable signals xi(t), i = 0, . . . ,M−1 is to be transmitted during a periodof time T over an AWGN channel. Moreover, each signal is identical to all others in the subinterval[t1, t2] where 0 < t1 < t2 < T .

(a) Show that the optimum receiver may ignore the subinterval [t1, t2].

(b) Equivalently, show that if x0, . . . ,xM−1 all have the same projection in one dimension6, thenthis dimension may be ignored.

(c) Does this result necessarily hold true if the noise is Gaussian but not white? Explain.

Solution:

(a) The data signals xi(t) being equiprobable, the optimum decision rule is the Maximum Like-

lihood (ML) rule, given by, (in vector form) mini |y − xi|2. From the invariance of the innerproduct, the ML rule is equivalent to,

mini

∫ T

0

|y(t)− xi(t)|2 dt

6xTi =[xi1 xi2 . . . xiN

]are vectors of length N , ∃k : xik = xjk, ∀i, j ∈ {0, . . . ,M − 1}.

26

Page 27: תרגילים היילרן

The integral can be written as a sum of three integrals:

∫_0^T |y(t) − xi(t)|² dt = ∫_0^{t1} |y(t) − xi(t)|² dt + ∫_{t1}^{t2} |y(t) − xi(t)|² dt + ∫_{t2}^T |y(t) − xi(t)|² dt

Since the second integral, over the interval [t1, t2], is constant as a function of i, the optimum decision rule reduces to

min_i { ∫_0^{t1} |y(t) − xi(t)|² dt + ∫_{t2}^T |y(t) − xi(t)|² dt }

and therefore the optimum receiver may ignore the interval [t1, t2].

(b) In an appropriate orthonormal basis of dimension N ≤ M, the vectors xi and y are given by

xi^T = [xi1 xi2 . . . xiN],  y^T = [y1 y2 . . . yN].

Assume that xim = x1m for all i. The optimum decision rule becomes

min_i ∑_{k=1}^N |yk − xik|² ⇔ min_i { ∑_{k=1, k≠m}^N |yk − xik|² + |ym − xim|² }

Since |ym − xim|² is constant for all i, the optimum decision rule becomes

min_i ∑_{k=1, k≠m}^N |yk − xik|²

Therefore, the projection on the mth dimension may be ignored by the optimum receiver.

(c) The result does not hold in general if the noise is colored Gaussian noise. In that case the noise along one component is correlated with the noise in the other components, so a component in which all signals agree is no longer irrelevant: it carries information about the noise in the remaining components, and all components turn out to be relevant. Equivalently, in the time domain, the subinterval [t1, t2] can no longer be ignored by the optimum receiver.

6. Let three orthonormal waveforms be defined as

ψ1(t) = {√3, 0 ≤ t < 1/3; 0, otherwise},  ψ2(t) = {√3, 1/3 ≤ t < 2/3; 0, otherwise},  ψ3(t) = {√3, 2/3 ≤ t < 1; 0, otherwise},

and consider the three signal waveforms

s1(t) = ψ1(t) + (3/4)ψ2(t) + (√3/4)ψ3(t)
s2(t) = −ψ1(t) + (3/4)ψ2(t) + (√3/4)ψ3(t)
s3(t) = −(3/4)ψ2(t) − (√3/4)ψ3(t).

Assume that these signals are used to transmit equiprobable symbols over an AWGN channel with noise spectral density N0/2.


(a) Show that optimal decisions (minimum probability of symbol error) can be obtained via the outputs of two correlators (or sampled matched filters), and specify the waveforms used in these correlators (or the impulse responses of the filters).

(b) Let p(e) be the resulting probability of symbol error when optimal demodulation and detection is employed. Show that

Q(√(2/N0)) < p(e) < 2Q(√(2/N0)).

Solution:

The three signals can be expressed in terms of two orthonormal basis waveforms φ1(t) and φ2(t). These can be chosen, e.g., as

φ1(t) = ψ1(t),  φ2(t) = (√3/2)ψ2(t) + (1/2)ψ3(t).

The above choice gives

s1(t) = φ1(t) + (√3/2)φ2(t),  s2(t) = −φ1(t) + (√3/2)φ2(t),  s3(t) = −(√3/2)φ2(t),

corresponding to the vector representation

s1 = (1, √3/2),  s2 = (−1, √3/2),  s3 = (0, −√3/2),

that is, the three corners of an equilateral triangle of side length 2.

(a) Use the two basis waveforms derived above to implement a correlation receiver.

(b) Since the three signals are at pairwise distance 2 and the transmitted signals are equiprobable, the following holds:

Pr{error|s1(t)} = Pr{error|s2(t)} = Pr{error|s3(t)}.

The union bound gives an upper bound by including, for each point, the other two:

Pr{error|s1(t)} = Pr{detect s2(t) or s3(t)|s1(t)} ≤ Pr{detect s2(t)|s1(t)} + Pr{detect s3(t)|s1(t)} = 2Q(√(2/N0)).

The lower bound is obtained by counting only one neighbor:

Pr{error|s1(t)} ≥ Pr{detect s2(t)|s1(t)} = Q(√(2/N0)).
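The bounds in (b) can be checked with a short Monte Carlo simulation of the triangle constellation s1 = (1, √3/2), s2 = (−1, √3/2), s3 = (0, −√3/2) (a sketch; the value N0 = 0.5 is an arbitrary choice):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
N0 = 0.5                      # arbitrary noise PSD for the check
sigma = sqrt(N0 / 2)          # per-dimension noise standard deviation
pts = np.array([[1, sqrt(3)/2], [-1, sqrt(3)/2], [0, -sqrt(3)/2]])

n_trials = 200_000
tx = rng.integers(0, 3, n_trials)
rx = pts[tx] + sigma * rng.standard_normal((n_trials, 2))
# minimum-distance (ML) decision
dec = np.argmin(((rx[:, None, :] - pts[None, :, :])**2).sum(axis=2), axis=1)
pe_sim = np.mean(dec != tx)

Q = lambda x: 0.5 * erfc(x / sqrt(2))
lower, upper = Q(sqrt(2 / N0)), 2 * Q(sqrt(2 / N0))
print(lower, pe_sim, upper)   # the estimate falls between the two bounds
```

The simulated symbol error rate lands strictly between the single-neighbor lower bound and the union upper bound.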


6 Optimal Receiver for the Waveform Channel

1. [1, Problem 5.4].
A binary digital communication system employs the signals

s0(t) = {0, 0 ≤ t < T; 0, otherwise},  s1(t) = {A, 0 ≤ t < T; 0, otherwise}

for transmitting the information. This is called on-off signaling. The received signal r(t) obeys

r(t) = si(t) + n(t), i = 0, 1,

where n(t) is a zero-mean AWGN with variance σn². The demodulator cross-correlates the received signal r(t) with si(t), i = 0, 1, and samples the output of the correlator at t = T.

(a) Determine the optimum detector for an AWGN channel and the optimum threshold, assuming that the signals are equally probable.

(b) Determine the probability of error as a function of the SNR. How does on-off signaling compare with antipodal signaling?

Solution:

(a) The correlation-type demodulator employs the filter

f(t) = {1/√T, 0 ≤ t < T; 0, otherwise}.

Hence, the sampled outputs of the cross-correlators are

r = si + n, i = 0, 1,

where s0 = 0, s1 = A√T, and the noise term n is a zero-mean Gaussian random variable with variance σn² = N0/2. The probability density function of the sampled output is

f(r|s0) = (1/√(πN0)) e^{−r²/N0},  f(r|s1) = (1/√(πN0)) e^{−(r−A√T)²/N0}.

The signal power is (1/2)·0 + (1/2)·A²T = A²T/2, and the noise power is N0/2. Therefore, the SNR for on-off signaling is

SNR_On-Off = (A²T/2)/(N0/2) = A²T/N0.

The minimum-error decision rule is

f(r|s1)/f(r|s0) ≷ 1 (decide s1 if larger)  ⇒  r ≷ (1/2)A√T (decide s1 if larger).


(b) The average probability of error is

p(e) = (1/2)∫_{A√T/2}^∞ f(r|s0)dr + (1/2)∫_{−∞}^{A√T/2} f(r|s1)dr
= (1/2)∫_{A√T/2}^∞ (1/√(πN0)) e^{−r²/N0} dr + (1/2)∫_{−∞}^{A√T/2} (1/√(πN0)) e^{−(r−A√T)²/N0} dr
= (1/2)∫_{(1/2)√(2/N0)·A√T}^∞ (1/√(2π)) e^{−x²/2} dx + (1/2)∫_{−∞}^{−(1/2)√(2/N0)·A√T} (1/√(2π)) e^{−x²/2} dx
= Q((1/2)√(2/N0)·A√T) = Q(√(SNR_On-Off/2)).

Thus, on-off signaling requires a factor of two more energy to achieve the same probability of error as antipodal signaling.
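The 3 dB penalty is visible by evaluating both error expressions at equal average energy per bit Eb (for on-off, Eb = A²T/2, so SNR_On-Off = 2Eb/N0 and p(e) = Q(√(Eb/N0)); a small sketch with an arbitrary Eb/N0 grid):

```python
from math import erfc, sqrt

Q = lambda x: 0.5 * erfc(x / sqrt(2))

for ebn0_db in (4, 8, 12):
    g = 10 ** (ebn0_db / 10)       # Eb/N0, linear scale
    pe_onoff = Q(sqrt(g))          # on-off: Q(sqrt(SNR/2)), SNR = 2*Eb/N0
    pe_antipodal = Q(sqrt(2 * g))  # antipodal: Q(sqrt(2*Eb/N0))
    print(ebn0_db, pe_onoff, pe_antipodal)
# Doubling Eb/N0 (+3 dB) for on-off recovers the antipodal error probability.
```

At each Eb/N0 the antipodal scheme is strictly better, and on-off at twice the energy matches it exactly.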

2. [2, Problem 5.11].
Consider the optimal detection of the sinusoidal signal

s(t) = sin(8πt/T), 0 ≤ t ≤ T,

in additive white Gaussian noise.

(a) Determine the correlator output (at t = T) assuming a noiseless input.

(b) Determine the corresponding matched filter output, assuming that the filter includes a delay T to make it causal.

(c) Hence show that these two outputs are the same at the time instant t = T.

Solution:

For the noiseless case, the received signal is r(t) = s(t), 0 ≤ t ≤ T.

(a) The correlator output is

y(T) = ∫_0^T r(τ)s(τ)dτ = ∫_0^T s²(τ)dτ = ∫_0^T sin²(8πτ/T)dτ = T/2.

(b) The matched filter has impulse response h(t) = s(T − t). Since r(λ) is nonzero only on [0, T] and h(t − λ) = s(T − t + λ) is nonzero only for t − T ≤ λ ≤ t, the output for 0 ≤ t ≤ T is

y(t) = ∫_{−∞}^∞ r(λ)h(t − λ)dλ = ∫_0^t s(λ)s(T − t + λ)dλ
= ∫_0^t sin(8πλ/T) sin(8π(T − t + λ)/T) dλ
= (1/2)∫_0^t cos(8π(T − t)/T)dλ − (1/2)∫_0^t cos(8π(T − t + 2λ)/T)dλ
= (t/2)cos(8π(T − t)/T) − (T/16π) sin(8πt/T).

(c) Sampling the matched filter output at t = T gives

y(T) = (T/2)cos(0) − (T/16π)sin(8π) = T/2,

which is exactly the same as the correlator output determined in item (2a).
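The equality of the correlator output and the sampled matched filter output can be verified numerically (a sketch; T = 1 and the sampling step are arbitrary choices):

```python
import numpy as np

T, dt = 1.0, 1e-4
t = np.arange(0, T, dt)
s = np.sin(8 * np.pi * t / T)

# Correlator: integrate r(t)s(t) over [0, T] with r = s (noiseless input)
y_corr = np.sum(s * s) * dt

# Matched filter h(t) = s(T - t); sample the full convolution at t = T
h = s[::-1]
y_mf = np.convolve(s, h)[len(t) - 1] * dt   # index len(t)-1 corresponds to t = T

print(y_corr, y_mf)   # both ≈ T/2 = 0.5
```

Both numbers agree with the analytical value T/2.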

3. SNR Maximization with a Matched Filter.
Prove the following theorem:
For the real system shown in Figure 4, the filter h(t) that maximizes the signal-to-noise ratio at sample time Ts is given by the matched filter h(t) = x(Ts − t).

Figure 4: SNR maximization by matched filter.

Solution:

Compute the SNR at sample time t = Ts as follows:

Signal Energy = [x(t) ∗ h(t)|_{t=Ts}]² = [∫_{−∞}^∞ x(t)h(Ts − t)dt]² = [⟨x(t), h(Ts − t)⟩]²

The sampled noise at the matched filter output has energy (mean square)

Noise Energy = E{ ∫_{−∞}^∞ n(t)h(Ts − t)dt · ∫_{−∞}^∞ n(s)h(Ts − s)ds }
= ∫_{−∞}^∞ ∫_{−∞}^∞ (N0/2)δ(t − s)h(Ts − t)h(Ts − s)dt ds
= (N0/2)∫_{−∞}^∞ h²(Ts − t)dt
= (N0/2)‖h‖²

The signal-to-noise ratio, defined as the ratio of the signal power to the noise power, equals

SNR = (2/N0)·[⟨x(t), h(Ts − t)⟩]²/‖h‖²

The Cauchy-Schwarz inequality states that

[⟨x(t), h(Ts − t)⟩]² ≤ ‖x‖²·‖h‖²

with equality if and only if x(t) = k·h(Ts − t), where k is some arbitrary constant. Thus, by inspection, the SNR is maximized over all choices of h(t) when h(t) = x(Ts − t). The filter h(t) is matched to x(t), and the corresponding maximum SNR (for any k) is

SNR_max = (2/N0)‖x‖²
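The theorem can be illustrated numerically: for an arbitrary pulse, the time-reversed (matched) filter attains a larger output SNR than other candidate filters, and its SNR equals 2‖x‖²/N0 (a sketch; the pulse shape, the candidate filters, and N0 are arbitrary choices):

```python
import numpy as np

dt, Ts = 1e-3, 1.0
t = np.arange(0, Ts, dt)
x = np.where(t < 0.5, 1.0, -0.5) * np.sin(2 * np.pi * t)   # arbitrary pulse
N0 = 1.0

def snr(h):
    """Output SNR at t = Ts: 2<x, h(Ts-.)>^2 / (N0 ||h||^2)."""
    sig = np.sum(x * h[::-1]) * dt          # <x(t), h(Ts - t)>
    return 2 * sig**2 / (N0 * np.sum(h**2) * dt)

matched = x[::-1]                            # h(t) = x(Ts - t)
others = [np.ones_like(t), np.exp(-t), np.sin(2 * np.pi * t)]
print(snr(matched), 2 * np.sum(x**2) * dt / N0)   # matched SNR = 2||x||^2 / N0
```

By the Cauchy-Schwarz inequality the matched choice is never beaten by the other candidates.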


4. The optimal receiver.
Consider the signals s0(t), s1(t) with the respective probabilities p0, p1.

s0(t) = {√(E/T), 0 ≤ t < aT; −√(E/T), aT ≤ t < T; 0, otherwise}
s1(t) = {√(2E/T) cos(2πt/T), 0 ≤ t < T; 0, otherwise}

The observation r(t) obeys

r(t) = si(t) + n(t), i = 0, 1,

where n(t) is zero-mean white Gaussian noise with E{n(t)n(τ)} = (N0/2)δ(t − τ).

(a) Find the optimal receiver for the above two signals; write the solution in terms of s0(t) and s1(t).

(b) Find the error probability of the optimal receiver for equiprobable signals.

(c) Find the parameter a which minimizes the error probability.

Solution:

(a) We will use a type II receiver, which uses filters matched to the signals si(t), i = 0, 1. The optimal receiver is depicted in Figure 5.

Figure 5: Optimal receiver - II.

where h0(t) = s0(T − t), h1(t) = s1(T − t).

The Max block in Figure 5 can be implemented as follows:

y = y0 − y1 ≷ 0 (decide s0(t) if positive, s1(t) otherwise).

The random variable y obeys

y = [h0(t) ∗ r(t)]|_{t=T} + (N0/2)ln p0 − E/2 − [h1(t) ∗ r(t)]|_{t=T} − (N0/2)ln p1 + E/2
= (N0/2)ln(p0/p1) + [(h0(t) − h1(t)) ∗ r(t)]|_{t=T}

Hence the optimal receiver can be implemented using one convolution operation instead of two, as depicted in Figure 6.


Figure 6: Optimal receiver - II.

(b) For an equiprobable binary constellation in an AWGN channel, the probability of error is given by

p(e) = Q(d/(2σ)),  d = ‖s0 − s1‖,
d² = ‖s0 − s1‖² = ‖s0‖² + ‖s1‖² − 2⟨s0, s1⟩,

where σ² is the noise variance.

The correlation coefficient between the two signals, ρ, equals

ρ = ⟨s0, s1⟩/(‖s0‖·‖s1‖) = ⟨s0, s1⟩/E,

and for equal-energy signals

d² = 2E − 2⟨s0, s1⟩ ⇒ d = √(2E(1 − ρ)) ⇒ p(e) = Q(√(E(1 − ρ)/N0)).

(c) ρ is the only parameter in p(e) affected by a. An explicit calculation of ρ yields

⟨s0, s1⟩ = ∫_0^T s0(t)s1(t)dt
= ∫_0^{aT} √(E/T)·√(2E/T) cos(2πt/T)dt − ∫_{aT}^T √(E/T)·√(2E/T) cos(2πt/T)dt
= (√2·E/2π) sin 2πa + (√2·E/2π) sin 2πa

⇒ ρ = (√2/π) sin 2πa
⇒ p(e) = Q(√(E(1 − (√2/π) sin 2πa)/N0)).

In order to minimize the probability of error, we maximize the argument of the Q-function:

sin 2πa = −1 ⇒ a = 3/4
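The correlation coefficient ρ(a) = (√2/π)sin 2πa and the optimal a = 3/4 can be double-checked by numerical integration (a sketch assuming E = T = 1):

```python
import numpy as np

E, T, n = 1.0, 1.0, 20_000
t = np.linspace(0, T, n, endpoint=False)
dt = T / n

def rho(a):
    """Correlation coefficient <s0, s1>/E by numerical integration."""
    s0 = np.where(t < a * T, np.sqrt(E / T), -np.sqrt(E / T))
    s1 = np.sqrt(2 * E / T) * np.cos(2 * np.pi * t / T)
    return np.sum(s0 * s1) * dt / E

a_grid = np.linspace(0.0, 1.0, 401)
a_opt = a_grid[np.argmin([rho(a) for a in a_grid])]  # minimizing rho minimizes p(e)
print(round(a_opt, 3), round(rho(0.75), 4))          # ≈ 0.75, ≈ -0.45
```

The grid search lands on a = 3/4, where ρ attains its minimum −√2/π.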


5. The optimal receiver - II.
Consider the following equiprobable signals si(t), i = 0, 1, 2, 3.

s0(t) = {√(2E/T) cos(2πt/T), 0 ≤ t < T; 0, otherwise}
s1(t) = {−√(2E/T) cos(2πt/T), 0 ≤ t < T; 0, otherwise}
s2(t) = {√(E/T), 0 ≤ t < T/2; −√(E/T), T/2 ≤ t < T; 0, otherwise}
s3(t) = {−√(E/T), 0 ≤ t < T/2; √(E/T), T/2 ≤ t < T; 0, otherwise}

The observation r(t) obeys

r(t) = si(t) + n(t), i = 0, 1, 2, 3,

where n(t) is zero-mean white Gaussian noise with E{n(t)n(τ)} = (N0/2)δ(t − τ).

(a) Find a signal space representation for the signals si(t), i = 0, 1, 2, 3, and draw the optimal decision regions.

(b) Find the optimal receiver for the above four signals which comprises at most two filters (or two multipliers and integrators).

(c) Find the error probability of the optimal receiver.

Solution:

(a) The following two orthonormal functions comprise a basis of the signal space:

φ0(t) = (1/√E)s0(t),  φ1(t) = (1/√E)s2(t).

It is easy to verify that ⟨φ0(t), φ1(t)⟩ = 0 and that {φ0(t), φ1(t)} spans the signal space. Figure 7 depicts the signal space spanned by {φ0(t), φ1(t)} and the optimal decision regions.

Figure 7: Decision regions for the optimal receiver.


Figure 8: Optimal receiver.

(b) Note that the signals si(t), i = 0, 1, 2, 3, are equiprobable and have equal energy. The optimal receiver is depicted in Figure 8.

The decision block depicted in Figure 8 implements the following rule:

ŝ(t) = { s0(t), y1 > |y2|;  s1(t), −y1 > |y2|;  s2(t), y2 > |y1|;  s3(t), −y2 > |y1| }.

(c) Let Q be defined as follows:

Q ≜ Q(√(2E)/√(2N0)) = Q(√(E/N0)).

Let p(c) = (1 − Q)² denote the probability of a correct decision. Then the error probability is

p(e) = 1 − p(c) = 1 − (1 − Q)² = 2Q − Q².
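The closed form p(e) = 2Q − Q² can be validated by simulating this rotated-QPSK constellation (a sketch; the values of E and N0 are arbitrary choices):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
E, N0 = 1.0, 0.5
sigma = sqrt(N0 / 2)
# Vector representation: s0=(√E,0), s1=(-√E,0), s2=(0,√E), s3=(0,-√E)
pts = sqrt(E) * np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

n = 300_000
tx = rng.integers(0, 4, n)
rx = pts[tx] + sigma * rng.standard_normal((n, 2))
dec = np.argmin(((rx[:, None, :] - pts[None, :, :])**2).sum(axis=2), axis=1)
pe_sim = np.mean(dec != tx)

Qv = 0.5 * erfc(sqrt(E / N0) / sqrt(2))   # Q(sqrt(E/N0))
pe_theory = 2 * Qv - Qv**2
print(pe_sim, pe_theory)   # agree to Monte Carlo accuracy
```

The simulated symbol error rate matches 2Q − Q² to within the Monte Carlo noise floor.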


7 The Probability of Error

1. [1, Problem 5.10].
A ternary communication system transmits one of three signals, s(t), 0, or −s(t), every T seconds. The received signal is either r(t) = s(t) + z(t), r(t) = z(t), or r(t) = −s(t) + z(t), where z(t) is white Gaussian noise with E{z(t)} = 0 and Φzz(τ) = (1/2)E{z(t)z*(τ)} = N0δ(t − τ). The optimum receiver computes the correlation metric

U = Re{∫_0^T r(t)s*(t)dt}

and compares U with a threshold A and a threshold −A. If U > A, the decision is made that s(t) was sent. If U < −A, the decision is made in favor of −s(t). If −A ≤ U ≤ A, the decision is made in favor of 0.

(a) Determine the three conditional probabilities of error: p(e|s(t)), p(e|0), and p(e|−s(t)).

(b) Determine the average probability of error p(e) as a function of the threshold A, assuming that the three symbols are equally probable a priori.

(c) Determine the value of A that minimizes p(e).

Solution:

(a) U = Re

{∫ T0r(t)s(t)dt

}, where r(t) =

s(t) + z(t)−s(t) + z(t)

z(t)

depending on which signal was

sent. If we assume that s(t) was sent:

U = Re

{∫ T

0

s(t)s∗(t)dt

}+ Re

{∫ T

0

z(t)s∗(t)dt

}= 2E +N

where E = 12

∫ T0s(t)s∗(t)dt is a constant, and N = Re

{∫ T0z(t)s∗(t)dt

}is a Gaussian

random variable with zero mean and variance 2EN0. Hence, given that s(t) was sent, theprobability of error is:

p1(e) = Pr{N < A− 2E} = Q

(2E −A√

2EN0

)When −s(t) is transmitted: U = −2E +N , and the corresponding conditional error proba-bility is:

p2(e) = Pr{N > −A+ 2E} = Q

(2E −A√

2EN0

)and finally, when 0 is transmitted: U = N , and the corresponding error probability is:

p3(e) = Pr{N > A or N < −A} = 2Q

(A√

2EN0

)(b)

p(e) =1

3[p1(e) + p2(e) + p3(e)] =

2

3

[Q

(2E −A√

2EN0

)+Q

(A√

2EN0

)]

36

Page 37: תרגילים היילרן

(c) In order to minimize p(e):dp(e)

dA= 0⇒ A = E

where we differentiate Q(x) =∫∞x

1√2πe−

t2

2 dt with respect to x, using the Leibnitz rule:ddx

( ∫∞f(x)

g(a)da)

= − dfdxg(f(x)).

Using this threshold:

p(e) =4

3Q

(√E

2N0

)2. [1, Problem 5.19].

Consider a signal detector with an input

r = ±A + n, A > 0,

where +A and −A occur with equal probability and the noise variable n is characterized by the Laplacian p.d.f.

f(n) = (1/(√2·σ))e^{−√2|n|/σ}.

(a) Determine the probability of error as a function of the parameters A and σ.

(b) Determine the SNR required to achieve an error probability of 10⁻⁵. How does the SNR compare with the result for a Gaussian p.d.f?

Solution:

(a) Let λ = √2/σ. The optimal receiver uses the criterion

f(r|A)/f(r|−A) = e^{−λ[|r−A|−|r+A|]} ≷ 1 (decide A if larger)  ⇒  r ≷ 0 (decide A if positive).

The average probability of error is

p(e) = (1/2)Pr{Error|A} + (1/2)Pr{Error|−A}
= (1/2)∫_{−∞}^0 f(r|A)dr + (1/2)∫_0^∞ f(r|−A)dr
= (1/2)∫_{−∞}^0 (λ/2)e^{−λ|r−A|}dr + (1/2)∫_0^∞ (λ/2)e^{−λ|r+A|}dr
= (λ/4)∫_{−∞}^{−A} e^{−λ|x|}dx + (λ/4)∫_A^∞ e^{−λ|x|}dx
= (1/2)e^{−λA} = (1/2)e^{−√2·A/σ}.

(b) The variance of the noise is σ², hence the SNR is

SNR = A²/σ²,

and the probability of error is given by

p(e) = (1/2)e^{−√(2·SNR)}.

For p(e) = 10⁻⁵ we obtain

ln(2·10⁻⁵) = −√(2·SNR) ⇒ SNR = 17.674 dB.

If the noise were Gaussian, the probability of error for antipodal signaling would be

p(e) = Q(√SNR),

where SNR is the signal-to-noise ratio at the output of the matched filter. With p(e) = 10⁻⁵ we find √SNR = 4.26 and therefore SNR = 12.594 dB. Thus the required signal-to-noise ratio is about 5 dB less when the additive noise is Gaussian.
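Both SNR figures can be reproduced numerically; the Gaussian case inverts the Q-function by bisection (a sketch):

```python
from math import log, log10, sqrt, erfc

target = 1e-5

# Laplacian noise: p(e) = 0.5*exp(-sqrt(2*SNR))  =>  SNR = ln(2*target)^2 / 2
snr_lap = log(2 * target) ** 2 / 2
snr_lap_db = 10 * log10(snr_lap)

# Gaussian noise: p(e) = Q(sqrt(SNR)); invert Q by bisection on [0, 10]
Q = lambda x: 0.5 * erfc(x / sqrt(2))
lo, hi = 0.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if Q(mid) > target else (lo, mid)
x = (lo + hi) / 2                  # Q(x) = 1e-5  =>  x ≈ 4.26
snr_gauss_db = 10 * log10(x ** 2)

print(round(snr_lap_db, 2), round(snr_gauss_db, 2))  # ≈ 17.67 and ≈ 12.6
```

The gap between the two values is roughly 5 dB, as stated above.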

3. [1, Problem 5.38].
The discrete sequence

rk = √Eb·ck + nk, k = 1, 2, . . . , n,

represents the output sequence of samples from a demodulator, where ck = ±1 are elements of one of two possible code words, C1 = [1 1 . . . 1] and C2 = [1 1 . . . 1 −1 . . . −1]. The code word C2 has w elements that are +1 and n − w elements that are −1, where w is a positive integer. The noise sequence {nk} is white Gaussian with variance σ².

(a) What is the optimum ML detector for the two possible transmitted signals?

(b) Determine the probability of error as a function of the parameters σ², Eb, w.

(c) What is the value of w that minimizes the error probability?

Solution:

(a) The optimal ML detector selects the sequence Ci that minimizes the quantity:

D(r,Ci) =

n∑k=1

(rk −√Ebcik)2

The metrics of the two possible transmitted sequences are

D(r,C1) =

w∑k=1

(rk −√Eb)

2 +

n∑k=w+1

(rk −√Eb)

2

D(r,C2) =

w∑k=1

(rk −√Eb)

2 +

n∑k=w+1

(rk +√Eb)

2

Since the first term of the right side is common for the two equations, we conclude that theoptimal ML detector can base its decisions only on the last n − w received elements of r.That is

w∑k=w+1

(rk −√Eb)

2 −w∑

k=w+1

(rk +√Eb)

2C2

≷C1

0

or equivalentlyw∑

k=w+1

rk

C1

≷C2

0

38

Page 39: תרגילים היילרן

(b) Since rk =√Ebcik + nk the probability of error Pr{Error|C1} is

Pr{Error|C1} = Pr

{√Eb(n− w) +

n∑k=w+1

nk < 0

}

= Pr

{ n∑k=w+1

nk < −(n− w)√Eb

}

The R.V u =∑nk=w+1 nk is zero-mean Gaussian with variance σ2

u = (n− w)σ2. Hence

Pr{Error|C1} = p1(e) =1√

2πσ2u

∫ −(n−w)√Eb

−∞exp

(− x2

σ2u

)dx = Q

(√Eb(n− w)

σ2

)Similarly we find that Pr{Error|C1} = Pr{Error|C2} and since the two sequences areequiprobable

p(e) = Q

(√Eb(n− w)

σ2

)(c) The probability of error p(e) is minimized when Eb(n−w)

σ2 is maximized, that is for w = 0. Thisimplies that C1 = −C2 and thus the distance between the two sequences is the maximumpossible.

4. Suboptimal receiver.
Consider a binary system transmitting the signals s0(t), s1(t) with equal probability.

s0(t) = {√(2E/T) sin(2πt/T), 0 ≤ t ≤ T; 0, otherwise}
s1(t) = {√(2E/T) cos(2πt/T), 0 ≤ t ≤ T; 0, otherwise}

The observation r(t) obeys

r(t) = si(t) + n(t), i = 0, 1,

where n(t) is white Gaussian noise with E{n(t)} = 0 and E{n(t)n(τ)} = (N0/2)δ(t − τ).

(a) Sketch an optimal and efficient (in the sense of a minimal number of filters) receiver. What is the error probability when this receiver is used?

(b) What is the error probability of the following receiver?

∫_0^{T/2} r(t)dt ≷ 0 (decide s0 if positive, s1 otherwise)

(c) Consider the following receiver:

∫_0^{aT} r(t)dt ≷ K (decide s0 if larger), 0 ≤ a ≤ 1,

where K is the optimal threshold for ∫_0^{aT} r(t)dt. Find the a which minimizes the probability of error. A numerical solution may be used.


Figure 9: Optimal receiver type II.

Solution:

(a) The signals are equiprobable and have equal energy. We will use a type II receiver, depicted in Figure 9.

The distance between the signals is

d² = ∫_0^T (2E/T)(sin(2πt/T) − cos(2πt/T))² dt = 2E ⇒ d = √(2E).

The receiver depicted in Figure 9 is equivalent to the following (and more efficient) receiver, depicted in Figure 10.

Figure 10: Efficient optimal receiver.

For a binary system with equiprobable signals s0(t) and s1(t), the probability of error is given by

p(e) = Q(d/(2σ)) = Q(d/(2√(N0/2))) = Q(d/√(2N0)),

where d, the distance between the signals, is given by d = ‖s0(t) − s1(t)‖ = ‖s0 − s1‖. Hence, the probability of error is

p(e) = Q(d/√(2N0)) = Q(√(E/N0)).

(b) Let us define the random variable Y = ∫_0^{T/2} r(t)dt. Y obeys

Y|s0 = ∫_0^{T/2} s0(t)dt + ∫_0^{T/2} n(t)dt
Y|s1 = ∫_0^{T/2} s1(t)dt + ∫_0^{T/2} n(t)dt

Let us define the random variable N = ∫_0^{T/2} n(t)dt. N is a zero-mean Gaussian random variable with variance

Var{N} = E{∫_0^{T/2}∫_0^{T/2} n(τ)n(λ)dτdλ} = ∫_0^{T/2}∫_0^{T/2} (N0/2)δ(τ − λ)dτdλ = N0T/4.


Y|si is a Gaussian random variable (note that Y itself is not Gaussian, but a Gaussian mixture!) with mean

E{Y|s0} = ∫_0^{T/2} s0(t)dt = √(2ET)/π,
E{Y|s1} = ∫_0^{T/2} s1(t)dt = 0.

The variance of Y|si is identical in both cases and equals the variance of N. For the given decision rule the error probability is

p(e) = p(s0)Pr{Y < 0|s0} + p(s1)Pr{Y > 0|s1} = (1/2)Q((2/π)√(2E/N0)) + 1/4.

(c) We will use the same derivation procedure as in the previous item. Define the random variables Y, N as follows:

Y = ∫_0^{aT} r(t)dt,  N = ∫_0^{aT} n(t)dt,
E{N} = 0,  Var{N} = aTN0/2,

E{Y|s0} = ∫_0^{aT} s0(t)dt = (√(2ET)/2π)(1 − cos 2πa),
E{Y|s1} = ∫_0^{aT} s1(t)dt = (√(2ET)/2π) sin 2πa,
Var{Y|s0} = Var{Y|s1} = Var{N}.

The distance between the means of Y|s0 and Y|s1 equals

d = |(√(2ET)/2π)(1 − cos 2πa − sin 2πa)|.

For an optimal decision rule the probability of error equals Q(d/(2σ)). Hence the probability of error equals

p(e) = Q((1/2π)√(E/N0)·(1/√a)·|1 − cos 2πa − sin 2πa|),

which is minimized when (1/√a)·|1 − cos 2πa − sin 2πa| is maximized.

Let aopt denote the a which maximizes the above expression. Numerical solution yields

aopt ≅ 0.5885
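The value aopt ≈ 0.5885 is reproduced by a simple grid search over the objective |1 − cos 2πa − sin 2πa|/√a (a sketch):

```python
import numpy as np

a = np.linspace(1e-4, 1.0, 1_000_001)
objective = np.abs(1 - np.cos(2 * np.pi * a) - np.sin(2 * np.pi * a)) / np.sqrt(a)
a_opt = a[np.argmax(objective)]
print(round(a_opt, 4))   # ≈ 0.5885
```

A dense grid is enough here since the objective is smooth on (0, 1].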

5. Non-matched receiver.
Consider binary signaling with equiprobable waveforms

s0(t) = 0, 0 ≤ t ≤ T,
s1(t) = √(2E/T) sin(πt/T), 0 ≤ t ≤ T,

in an AWGN channel with spectral density N0/2. The receiver for this problem is implemented as a filter with the impulse response

h(t) = {e^{−at}, t ≥ 0; 0, otherwise}, a ≥ 0.

More precisely, letting yT denote the value of the output of the filter sampled at t = T when fed by the received signal, the decision is

yT < K ⇒ ŝ = s0,
yT ≥ K ⇒ ŝ = s1,

where K > 0 is a decision threshold. Assume that only one symbol is transmitted and answer the following questions.

(a) Determine the resulting error probability p(e).

(b) Which value of the threshold K minimizes p(e)?

(c) With the optimal value of K (from the previous item), which value of the filter parameter a minimizes p(e)? A numerical solution may be used.

Solution:

(a) The decision variable can be expressed as

yT = s + W,

where s is either zero, corresponding to s0(t), or

s = ∫_{−∞}^∞ s1(τ)h(T − τ)dτ = ∫_0^T √(2E/T) sin(πτ/T)·e^{−a(T−τ)}dτ = π√(2ET)(1 + e^{−aT})/(a²T² + π²),

corresponding to s1(t), and where W is zero-mean Gaussian. The variance of W is

Var{W} = (N0/2)∫_0^∞ h²(t)dt = N0/(4a).

We conclude that the conditional distributions of yT are

yT|s0(t) ∼ N(0, N0/(4a)),  yT|s1(t) ∼ N(π√(2ET)(1 + e^{−aT})/(a²T² + π²), N0/(4a)).

The error probability is

p(e) = (1/2)(Pr(yT > K|s0(t)) + Pr(yT < K|s1(t)))
= (1/2)Q(√(4a/N0)·K) + (1/2)Q(√(4a/N0)·(π√(2ET)(1 + e^{−aT})/(a²T² + π²) − K)).


(b) As this is a binary decision problem in an AWGN channel, the probability of error p(e) is minimized when the decision threshold K lies halfway between the two alternatives for s, corresponding to

K = (1/2)·π√(2ET)(1 + e^{−aT})/(a²T² + π²),

giving

p(e) = Q(√(2ETa/N0)·π(1 + e^{−aT})/(a²T² + π²)).

(c) Choosing a to minimize p(e) corresponds to maximizing

√x(1 + e^{−x})/(x² + π²)

with respect to x = aT. The maximum is at x ≈ 1.1173; hence the optimal a is a ≈ 1.1173/T.


8 Bit Error Probability

1. [3, Example 6.2].
Compare the probability of bit error for 8PSK and 16PSK in an AWGN channel, assuming γb = Eb/N0 = 15 dB and equal a priori probabilities. Use the following approximations:

• The nearest-neighbor approximation given in class.

• γb ≈ γs/log2 M.

• The approximation for Pe,bit given in class.

Solution:

The nearest-neighbor approximation for the probability of error, in an AWGN channel, for an M-PSK constellation is

Pe ≈ 2Q(√(2γs) sin(π/M)).

The approximation for Pe,bit (under Gray mapping, at high enough SNR) is

Pe,bit ≈ Pe/log2 M.

For 8PSK we have γs = (log2 8)·10^{15/10} = 94.87. Hence

Pe ≈ 2Q(√189.74 sin(π/8)) = 1.355·10⁻⁷.

Using the approximation for Pe,bit we get

Pe,bit = Pe/3 = 4.52·10⁻⁸.

For 16PSK we have γs = (log2 16)·10^{15/10} = 126.49. Hence

Pe ≈ 2Q(√252.98 sin(π/16)) = 1.916·10⁻³.

Using the approximation for Pe,bit we get

Pe,bit = Pe/4 = 4.79·10⁻⁴.

Note that Pe,bit is much larger for 16PSK than for 8PSK at the same γb. This result is expected, since 16PSK packs more bits per symbol into a given constellation, so for a fixed energy per bit the minimum distance between constellation points is smaller.
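The four numbers above are straightforward to reproduce (a sketch of the same computation):

```python
from math import erfc, sqrt, sin, pi, log2

Q = lambda x: 0.5 * erfc(x / sqrt(2))

def psk_pe(M, gamma_b_db):
    """Nearest-neighbor approximation Pe ≈ 2Q(sqrt(2*gamma_s)*sin(pi/M))."""
    gamma_s = log2(M) * 10 ** (gamma_b_db / 10)
    pe = 2 * Q(sqrt(2 * gamma_s) * sin(pi / M))
    return pe, pe / log2(M)          # (Pe, Pe_bit) with the Gray-mapping rule

pe8, peb8 = psk_pe(8, 15)
pe16, peb16 = psk_pe(16, 15)
print(pe8, peb8)     # ≈ 1.36e-7, 4.5e-8
print(pe16, peb16)   # ≈ 1.9e-3, 4.8e-4
```

The same helper can be reused for any M-PSK order and γb.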

2. Bit error probability for a rectangular constellation.
Let p0(t) and p1(t) be two orthonormal functions, different from zero in the time interval [0, T]. The equiprobable signals defined in Figure 11 are transmitted through a zero-mean AWGN channel with noise PSD equal to N0/2.

(a) Calculate Pe for the optimal receiver.

(b) Calculate Pe,bit for the optimal receiver (optimal in the sense of minimal Pe).

(c) Approximate Pe,bit for high SNR (d/2 ≫ √(N0/2)). Explain.


Figure 11: Eight signals in rectangular constellation.

Solution:

Let n0 denote the noise projection on p0(t) and n1 the noise projection on p1(t). Clearly ni ∼ N(0, N0/2), i = 0, 1.

(a) Let Pc denote the probability of a correct symbol decision; hence Pe = 1 − Pc.

Pr{correct decision|(000) was transmitted} = (1 − Q((d/2)/√(N0/2)))²
= Pr{correct decision|(100) was transmitted}
= Pr{correct decision|(010) was transmitted}
= Pr{correct decision|(110) was transmitted}
= P1,

where the equalities are due to the constellation symmetry.

Pr{correct decision|(001) was transmitted} = (1 − Q((d/2)/√(N0/2)))(1 − 2Q((d/2)/√(N0/2)))
= Pr{correct decision|(101) was transmitted}
= Pr{correct decision|(011) was transmitted}
= Pr{correct decision|(111) was transmitted}
= P2,

where again the equalities are due to the constellation symmetry.

Hence

Pc = (1/2)[(1 − Q((d/2)/√(N0/2)))² + (1 − Q((d/2)/√(N0/2)))(1 − 2Q((d/2)/√(N0/2)))]

Pe = 1 − Pc ⇒ Pe = (1/2)[5Q((d/2)/√(N0/2)) − 3Q²((d/2)/√(N0/2))].

(b) Let b0 denote the MSB, b2 the LSB, and b1 the middle bit⁷. Let bi(s), i = 0, 1, 2, denote the ith bit of the constellation point s, and recall that n0 is the noise projection on p0(t).

Pr{error in b2|(000) was transmitted} = ∑_{s: b2(s)≠0} Pr{s was decided|(000) was transmitted}
= Pr{−5d/2 < n0 < −d/2} = Pr{d/2 < n0 < 5d/2}
= Q((d/2)/√(N0/2)) − Q((5d/2)/√(N0/2))
= Pr{error in b2|(100) was transmitted}
= Pr{error in b2|(010) was transmitted}
= Pr{error in b2|(110) was transmitted}
= P1,

where the equalities are due to the constellation symmetry.

Pr{error in b2|(001) was transmitted} = ∑_{s: b2(s)≠1} Pr{s was decided|(001) was transmitted}
= Pr{n0 < −3d/2} + Pr{n0 > d/2}
= Q((d/2)/√(N0/2)) + Q((3d/2)/√(N0/2))
= Pr{error in b2|(101) was transmitted}
= Pr{error in b2|(011) was transmitted}
= Pr{error in b2|(111) was transmitted}
= P2.

⁷For the top-left constellation point in Figure 11, (b0, b1, b2) = (010).


Using similar arguments we can calculate the bit error probability for b1:

Pr{error in b1|(000) was transmitted} = Q((3d/2)/√(N0/2))
= Pr{error in b1|(100) was transmitted}
= Pr{error in b1|(010) was transmitted}
= Pr{error in b1|(110) was transmitted}
= P3.

Pr{error in b1|(001) was transmitted} = Q((d/2)/√(N0/2))
= Pr{error in b1|(101) was transmitted}
= Pr{error in b1|(011) was transmitted}
= Pr{error in b1|(111) was transmitted}
= P4.

The bit error probability for b0 equals

Pr{error in b0|(000) was transmitted} = Q((d/2)/√(N0/2)) = P5.

Due to the constellation symmetry and the bit mapping, the bit error probability for b0 is the same for all constellation points.

Let Pe,bi, i = 0, 1, 2, denote the averaged (over all signals) bit error probability of the ith bit. Then

Pe,b0 = P5,  Pe,b1 = (1/2)(P3 + P4),  Pe,b2 = (1/2)(P1 + P2).

The averaged bit error probability Pe,bit is given by

Pe,bit = (1/3)∑_{i=0}^{2} Pe,bi = (5/6)Q((d/2)/√(N0/2)) + (1/3)Q((3d/2)/√(N0/2)) − (1/6)Q((5d/2)/√(N0/2)).

(c) For d/2 ≫ √(N0/2),

Pe,bit ≅ (5/6)Q((d/2)/√(N0/2)),  Pe ≅ (5/2)Q((d/2)/√(N0/2))  ⇒  Pe,bit → Pe/3.

Note that Pe/log2 M is the lower bound for Pe,bit.


3. Octagon constellation.
Consider the signal constellation depicted in Figure 12.

Figure 12: Octagonal constellation.

Each of the eight signal points carries three bits of information, with labeling as indicated in Figure 12. The bits are equiprobable and independent. The signals are transmitted over an AWGN channel with spectral density N0/2, and the receiver is the optimal (minimum symbol error probability) receiver.

(a) Sketch the decision regions of the optimal receiver.

(b) Let p(e) denote the resulting symbol error probability. Show that

Q(d/√(2N0)) < p(e) < Q(d/√(2N0)) + Q(2d/√(2N0)).

(c) Determine an exact expression for the bit error probability, in terms of the Q-function and the ratio d²/N0.

Solution:

(a) As the noise is AWGN, the optimal decoding rule is minimum distance. The corresponding decision regions of the optimal receiver are depicted in Figure 13.

(b) The upper bound is the union bound. Note that each signal point has two neighbors, one at distance d and one at distance 2d, and that only these two neighbors need to be included in the bound to make it valid.

The lower bound is the nearest-neighbor term in the union bound. Since there are more than two signal points, including only this term surely gives a lower bound.

(c) Let b0b1b2 be the bits corresponding to one signal point, with b0 the MSB and b2 the LSB. Let si denote the ith signal point, with

i = b0·4 + b1·2 + b2.


Figure 13: Decision regions for the optimal receiver. Ik is the region for deciding on sk, k = 0, 1, . . . , 7.

Let b̂m, m = 0, 1, 2, be the received bits after detection. The averaged bit error probability is

Pe,bit = (1/3)∑_{m=0}^{2} Pr{b̂m ≠ bm}.

Let r = (r0, r1) be the received point. For b0, assume b0 = 0 and note that b̂0 = 1 if the received point is in the right half-plane, that is,

Pr{b̂0 ≠ 0|b0 = 0} = Pr{r0 > 0|si, i ∈ {0, 1, 2, 3} was transmitted}
= (1/4)Pr{r0 > 0|s0} + (1/4)Pr{r0 > 0|s1} + (1/4)Pr{r0 > 0|s2} + (1/4)Pr{r0 > 0|s3}.

The distance between the constellation point s6 and the r1 axis is d/2. Referring to Figure 14, observe that the distance between the constellation point s6 and the r0 axis is d/2 + x = d/2 + √2·d.

Figure 14: Distances from the axes r0 and r1 for the constellation point s6. (2d)² = 2x² ⇒ x = √2·d.


Therefore,

Pr{b̂0 ≠ 0|b0 = 0} = (1/2)Pr{r0 > 0|s0} + (1/2)Pr{r0 > 0|s1}
= (1/2)Q(√(d²/(2N0))) + (1/2)Q((2√2 + 1)√(d²/(2N0))) = A0.

Due to symmetry, Pr{b̂0 ≠ 1|b0 = 1} = Pr{b̂0 ≠ 0|b0 = 0}; hence

Pr{b̂0 ≠ b0} = A0.

For b1 it is easy to see that Pr{b̂1 ≠ b1} = Pr{b̂0 ≠ b0}: assuming b1 = 0, an error occurs if r1 > 0, so the situation is identical to the case of b0 (subject to a 90° rotation).

Finally, let n = (n0, n1) be the noise terms in the received signal, and consider the rotated noise terms ñ = (ñ0, ñ1), as depicted in Figure 15⁸.

Figure 15: Decision regions for the optimal receiver, and rotated noise terms.

Then, assuming b2 = 0, we get

Pr{b̂2 ≠ 0|b2 = 0} = Pr{si, i ∈ {1, 3, 5, 7} was decided|sj, j ∈ {0, 2, 4, 6} was transmitted}
= (1/4)(Pr{b̂2 = 1|s = s0} + Pr{b̂2 = 1|s = s2} + Pr{b̂2 = 1|s = s4} + Pr{b̂2 = 1|s = s6}).

Note that due to the 90° symmetry,

Pr{b̂2 = 1|s = s0} = Pr{b̂2 = 1|s = s2} = Pr{b̂2 = 1|s = s4} = Pr{b̂2 = 1|s = s6}.

Thus,

Pr{b̂2 ≠ 0|b2 = 0} = Pr{b̂2 ≠ 0|s = s6}
= Pr{ŝ ∈ I7 ∪ I5 ∪ I1 ∪ I3|s = s6}
= Pr{ŝ ∈ I7 ∪ I5|s = s6} + Pr{ŝ ∈ I1 ∪ I3|s = s6}
= Pr{ñ0 > d, ñ1 < ((1 + √2)/√2)·d} + Pr{ñ0 < d, ñ1 > ((1 + √2)/√2)·d}
= Q(d/√(N0/2))·(1 − Q(((1 + √2)/√2)·d/√(N0/2))) + (1 − Q(d/√(N0/2)))·Q(((1 + √2)/√2)·d/√(N0/2))
= Q(d/√(N0/2)) + Q(((1 + √2)/√2)·d/√(N0/2)) − 2Q(((1 + √2)/√2)·d/√(N0/2))·Q(d/√(N0/2))
= A2.

Note that, due to the 90° symmetry of the problem, Pr{b̂2 ≠ 1|b2 = 1} = Pr{b̂2 ≠ 0|b2 = 0}. Hence,

Pe,bit = (2/3)A0 + (1/3)A2.

⁸Note that the vectors n and ñ have the same PDF, as they are projections of an AWGN on orthonormal bases.


9 Connection with the Concept of Capacity

1. [2, Problem 9.29].
A voice-grade channel of the telephone network has a bandwidth of 3.4 kHz. Assume real-valued symbols.

(a) Calculate the capacity of the telephone channel for a signal-to-noise ratio of 30 dB.

(b) Calculate the minimum signal-to-noise ratio required to support information transmission through the telephone channel at the rate of 4800 [bits/sec].

Solution:

(a) The channel bandwidth is W = 3.4 kHz and the received signal-to-noise ratio is SNR = 10³ = 30 dB. Hence the channel capacity is

C = W log2(1 + SNR) = 3.4·10³·log2(1 + 10³) = 33.9·10³ [bits/sec].

(b) The required SNR is the solution of the following equation:

4800 = 3.4·10³·log2(1 + SNR) ⇒ SNR = 1.66 ≈ 2.2 dB.
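Both answers follow directly from C = W log2(1 + SNR) (a small check):

```python
from math import log2, log10

W = 3.4e3                         # bandwidth [Hz]
C = W * log2(1 + 10 ** 3)         # (a) capacity at SNR = 30 dB [bits/sec]
snr = 2 ** (4800 / W) - 1         # (b) SNR needed for 4800 bits/sec
print(round(C), round(snr, 2), round(10 * log10(snr), 1))
```

Inverting the capacity formula for the rate in (b) gives SNR = 2^(R/W) − 1.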

2. [1, Problem 7.17].
Channel C1 is an additive white Gaussian noise channel with bandwidth W, average transmitter power P, and noise PSD N0/2. Channel C2 is an additive Gaussian noise channel with the same bandwidth and average power as channel C1, but with noise PSD Sn(f). It is further assumed that the total noise power is the same for both channels; that is,

∫_{−W}^{W} Sn(f)df = ∫_{−W}^{W} (N0/2)df = N0W.

Which channel do you think has the larger capacity? Give an intuitive reasoning.

Solution:

The capacity of the additive white Gaussian noise channel is

C = W log2(1 + P/(N0W)).

For the non-white Gaussian noise channel, although the noise power equals the noise power in the white Gaussian noise channel, the capacity is higher. The reason is that since the noise samples are correlated, knowledge of the previous noise samples provides partial information on the future noise samples and therefore reduces their effective variance.

3. Capacity of ISI channel. Consider a channel with Inter Symbol Interference (ISI) defined as follows:

yk = Σ_{i=0}^{L−1} hi x_{k−i} + zk.

The channel input obeys an average power constraint E{xk²} ≤ P, and the noise zk is i.i.d. Gaussian distributed: zk ∼ N(0, σz²). Assume that H(e^{j2πf}) has no zeros and show that the channel capacity is

C = (1/2) ∫_{−W}^{W} log{ 1 + [∆ − σz²/|H(e^{j2πf})|²]⁺ / (σz²/|H(e^{j2πf})|²) } df,


where ∆ is a constant selected such that

∫_{−W}^{W} [∆ − σz²/|H(e^{j2πf})|²]⁺ df = P.

You may use the following theorem.

Theorem 1. Let the transmitter have a maximum average power constraint of P [Watts]. The capacity of an additive Gaussian noise channel with noise power spectrum N(f) [Watts/Hz] is given by

C = (1/2) ∫_{−π}^{π} log2{ 1 + [ν − N(f)]⁺ / N(f) } df [bits/sec],

where ν is chosen so that ∫ [ν − N(f)]⁺ df = P.

Solution:

Since H(e^{j2πf}) has no zeros, the ISI "filter" is invertible. Inverting the channel results in

Ỹ(e^{j2πf}) = Y(e^{j2πf}) / H(e^{j2πf}) = X(e^{j2πf}) + Z(e^{j2πf}) / H(e^{j2πf}) = X(e^{j2πf}) + Z̃(e^{j2πf}).

This is a problem of a colored Gaussian channel with no ISI. The noise PSD is

S_Z̃Z̃(e^{j2πf}) = σz² / |H(e^{j2πf})|².

The capacity of this channel, using Theorem 1, is given by

C = (1/2) ∫_{−W}^{W} log{ 1 + [∆ − σz²/|H(e^{j2πf})|²]⁺ / (σz²/|H(e^{j2πf})|²) } df,

where ∆ is a constant selected such that

∫_{−W}^{W} [∆ − σz²/|H(e^{j2πf})|²]⁺ df = P.
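The ∆-selection rule can be illustrated numerically; a minimal water-filling sketch, with a made-up two-level effective noise profile that is not part of the problem:

```python
import math

# Water-filling sketch: `noise` is the effective noise PSD sigma_z^2/|H|^2 on a
# discrete frequency grid; find Delta such that sum([Delta - N]^+) * df = P.
def waterfill(noise, df, P):
    lo, hi = 0.0, max(noise) + P / (len(noise) * df) + 1.0
    for _ in range(100):                       # bisection on the water level
        mid = (lo + hi) / 2
        used = sum(max(mid - n, 0.0) for n in noise) * df
        lo, hi = (mid, hi) if used < P else (lo, mid)
    return (lo + hi) / 2

noise = [0.5] * 50 + [2.0] * 50                # hypothetical two-level profile
df, P = 0.01, 1.0
delta = waterfill(noise, df, P)
power = [max(delta - n, 0.0) for n in noise]   # per-bin power allocation
cap = 0.5 * sum(math.log2(1 + p / n) for p, n in zip(power, noise)) * df
```

For this profile the water level comes out at ∆ = 2.25: the good band gets most of the power, and the allocated power integrates back to P.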

4. Final B, 2011. Consider a communication system (denoted by system A) which consists of two transmitters (Tx1 and Tx2) capable of simultaneous transmission, and one receiver.

• Tx1 transmits a one-dimensional real signal with power constraint P [Watts], using the frequency band f1L ≤ f ≤ f1U, f1L = 0. In the frequency band f1L ≤ f ≤ f1U the channel frequency response is constant, real, and equals A > 0, such that the received signal power is PA². The noise is an AWGN with spectral density N0/2 [Watts/Hz]. Let W1 = f1U − f1L denote the channel bandwidth for Tx1.

• Tx2 transmits a real signal with power constraint P [Watts], using the frequency band f2L ≤ f ≤ f2U, f2L > 0. In the frequency band f2L ≤ f ≤ f2U the channel frequency response is constant, real, and equals B > 0, such that the received signal power is PB². The noise is an AWGN with spectral density N0/2 [Watts/Hz]. Let W2 = f2U − f2L denote the channel bandwidth for Tx2.


Figure 16: Frequency bands for Tx1 and Tx2.

• It is also given that the frequencies f1U and f2L obey f2L > f1U (the Tx1 frequency band and the Tx2 frequency band do not overlap), see Figure 16.

• Each transmitter operates separately (the transmitters cannot exchange transmission power). The receiver receives the entire bandwidth f1L ≤ f ≤ f2U.

(a) Find the capacity of system A.

System B improves system A by transmitting a complex signal in place of the real signal. Similarly to system A, each transmitter operates separately (the transmitters cannot exchange transmission power).

(b) Find the capacity of system B.

System C improves system B by enabling transmission power exchange between the two transmitters, resulting in a single effective transmitter with transmission power constraint 2P [Watts]. Note that this single transmitter is capable of simultaneous transmission in both frequency bands.

(c) For A = B, W1 = W2, compare the capacity of system C with the capacity of system B. Explicit capacity calculation is not needed. Explain your answer.

(d) For A < B, W1 = W2, compare the capacity of system C with the capacity of system B. Explicit capacity calculation is not needed. Explain your answer.

Solution:

(a) Let Ci, i = 1, 2 denote the capacity corresponding to Txi and Wi. Let CA denote the capacity of system A. Since the two transmitters are separate, CA = C1 + C2. Using Theorem 1 presented in class, and the fact that in the frequency band of Tx1 the channel gain is A, C1 equals

C1 = W1 log2(1 + A²P/(N0W1)).

Similarly, C2 equals

C2 = W2 log2(1 + B²P/(N0W2)),

and the capacity of system A is

CA = C1 + C2 = W1 log2(1 + A²P/(N0W1)) + W2 log2(1 + B²P/(N0W2)).


(b) Using Theorem 2 presented in class, as each transmitter is independent, we find the capacity for each transmitter separately. From Theorem 2,

C = ∫_{−W}^{W} log2(1 + P(f)|H(f)|²/N0) df,  P(f) = (∆ − N0/|H(f)|²)⁺,  P = ∫_{−W}^{W} P(f) df.

For Tx1, |H(f)| = A, hence

C1 = ∫_{−f1U}^{f1U} log2(1 + P1(f)A²/N0) df,  P1(f) = (∆ − N0/A²)⁺,  P = ∫_{−f1U}^{f1U} P1(f) df.

The equation P1(f) = (∆ − N0/A²)⁺ indicates that the power allocation is constant over 0 ≤ |f| ≤ f1U and zero otherwise. Using the last equation with P1(f) = K for 0 ≤ |f| ≤ f1U yields

2W1K = P ⇒ P1(f) = K = P/(2W1),

and

C1 = 2W1 log2(1 + A²P/(2W1N0)).

Following similar arguments for Tx2 yields

C2 = 2W2 log2(1 + B²P/(2W2N0)),

and the capacity of system B is

CB = C1 + C2 = 2W1 log2(1 + A²P/(2W1N0)) + 2W2 log2(1 + B²P/(2W2N0)).

(c) When there is one transmitter that can transmit on both bands simultaneously, the capacity is given by the water-filling theorem. Since A = B, W1 = W2, the channel is fixed over both bands and the same power is allocated to each band. Thus, P will be allocated to W1 and P will be allocated to W2, and the capacities of both systems will be identical.

(d) Since the channel is different in each band, the water-filling allocation will not be constant in frequency and the capacity will be higher for system C.


10 Continuous Phase Modulations

1. [1, Problem 4.14]. Consider an equivalent low-pass digitally modulated signal of the form

u(t) = Σn [an g(t − 2nT) − j bn g(t − 2nT − T)]

where {an} and {bn} are two sequences of statistically independent binary digits and g(t) is a sinusoidal pulse defined as

g(t) = { sin(πt/2T), 0 ≤ t ≤ 2T; 0, otherwise. }

This type of signal is viewed as a four-phase PSK signal in which the pulse shape is one-half cycle of a sinusoid. Each of the information sequences {an} and {bn} is transmitted at a rate of 1/2T [bits/sec] and, hence, the combined transmission rate is 1/T [bits/sec]. The two sequences are staggered in time by T seconds in transmission. Consequently, the signal u(t) is called staggered four-phase PSK.

(a) Show that the envelope |u(t)| is a constant, independent of the information an on the in-phase component and the information bn on the quadrature component. In other words, the amplitude of the carrier used in transmitting the signal is constant.

(b) Determine the power density spectrum of u(t).

(c) Compare the power density spectrum obtained from (1b) with the power density spectrum of the MSK signal [1, 4.4.2]. What conclusion can you draw from this comparison?

Solution:

(a) Since the signaling rate is 1/2T for each sequence and since g(t) has duration 2T, for any time instant only g(t − 2nT) and g(t − 2nT − T) or g(t − 2nT + T) will contribute to u(t). Hence, for 2nT ≤ t ≤ 2nT + T:

|u(t)|² = |an g(t − 2nT) − j bn g(t − 2nT − T)|²
        = an² g²(t − 2nT) + bn² g²(t − 2nT − T)
        = g²(t − 2nT) + g²(t − 2nT − T)
        = sin²(πt/2T) + sin²(π(t − T)/2T)
        = sin²(πt/2T) + cos²(πt/2T) = 1, ∀t.

(b) The power density spectrum is:

SU(f) = (1/T)|G(f)|²

where G(f) = ∫_{−∞}^{∞} g(t)e^{−j2πft} dt = ∫_0^{2T} sin(πt/2T)e^{−j2πft} dt. By using Euler's formula it is easily shown that:

G(f) = (4T/π) · cos(2πTf)/(1 − 16T²f²) · e^{−j2πfT} ⇒ SU(f) = (16T/π²) · cos²(2πTf)/(1 − 16T²f²)².
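The closed form for G(f) can be cross-checked by integrating the pulse's Fourier transform numerically; a sketch with the arbitrary choice T = 1, valid away from the removable singularity at |f| = 1/(4T):

```python
import cmath
import math

T = 1.0

def G_closed(f):
    """Closed-form G(f) for the half-sinusoid pulse (away from |f| = 1/(4T))."""
    return ((4 * T / math.pi) * math.cos(2 * math.pi * T * f)
            / (1 - 16 * T**2 * f**2) * cmath.exp(-2j * math.pi * f * T))

def G_numeric(f, n=4000):
    """Simpson integration of sin(pi t / 2T) e^{-j 2 pi f t} over [0, 2T]."""
    h = 2 * T / n
    s = 0j
    for k in range(n + 1):
        t = k * h
        w = 1 if k in (0, n) else (4 if k % 2 else 2)
        s += w * math.sin(math.pi * t / (2 * T)) * cmath.exp(-2j * math.pi * f * t)
    return s * h / 3
```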


(c) The above power density spectrum is identical to that of the MSK signal. Therefore, the MSK signal can be generated as a staggered four-phase PSK signal with a half-period sinusoidal pulse for g(t).

2. [1, Problem 5.29]. In an MSK signal, the initial state for the phase is either 0 or π rad. Determine the terminal phase state for the following four pairs of input data b0, b1: (a) 00; (b) 01; (c) 10; (d) 11.

Solution:

We assume that the input bits 0, 1 are mapped to the symbols −1 and 1, respectively. The terminal phase of an MSK signal at time instant n is given by

θ(n; s) = (π/2) Σ_{k=0}^{n} sk + θ0

where θ0 is the initial phase and sk is ±1 depending on the input bit at time instant k. The following table shows θ(1; s) for the two values of θ0 and the four input pairs of data:

θ0 | b0 | b1 | s0 | s1 | θ(1; s)
0  | 0  | 0  | −1 | −1 | −π
0  | 0  | 1  | −1 |  1 | 0
0  | 1  | 0  |  1 | −1 | 0
0  | 1  | 1  |  1 |  1 | π
π  | 0  | 0  | −1 | −1 | 0
π  | 0  | 1  | −1 |  1 | π
π  | 1  | 0  |  1 | −1 | π
π  | 1  | 1  |  1 |  1 | 2π
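The table can be generated mechanically from the phase recursion; a minimal sketch:

```python
import math

def terminal_phase(theta0, bits):
    """theta(n; s) = theta0 + (pi/2) * sum s_k, with bits 0/1 mapped to -1/+1."""
    return theta0 + (math.pi / 2) * sum(2 * b - 1 for b in bits)

# Same row order as the table: theta0 in {0, pi}, then (b0, b1) in binary order.
table = [(t0, b0, b1, terminal_phase(t0, (b0, b1)))
         for t0 in (0.0, math.pi) for b0 in (0, 1) for b1 in (0, 1)]
```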

3. [1, Problem 5.30]. A continuous-phase FSK signal with h = 1/2 is represented as

s(t) = ±√(2εb/Tb) cos(πt/2Tb) cos(2πfc t) ± √(2εb/Tb) sin(πt/2Tb) sin(2πfc t), 0 ≤ t ≤ 2Tb,

where the ± signs depend on the information bits transmitted.

(a) Show that this signal has constant amplitude.

(b) Sketch a block diagram of the modulator for synthesizing the signal from the input bit stream.

(c) Sketch a block diagram of the demodulator and detector for recovering the information bitstream from the signal.

Solution:

(a) The envelope of the signal is

|s(t)| = √( |sc(t)|² + |ss(t)|² )
       = √( (2εb/Tb)cos²(πt/2Tb) + (2εb/Tb)sin²(πt/2Tb) )
       = √(2εb/Tb).


(b) The signal s(t) is equivalent to an MSK signal. Figure 17 depicts a block diagram of the modulator for synthesizing the signal. In Figure 17, xe is the even pulse sequence and xo is the odd pulse sequence.

Figure 17: Modulator block diagram.

(c) Figure 18 depicts a block diagram of the demodulator.

Figure 18: Demodulator block diagram.

4. [1, Problem 5.31].⁹ Sketch the state trellis and the state diagram for partial-response CPM with h = 1/2 and

g(t) = { 1/4T, 0 ≤ t ≤ 2T; 0, otherwise. }

Solution:

Since p = 2, m is odd (m = 1), L = 2 and M = 2, there are

Ns = 2pM = 8

phase states, which we denote as Qn = (θn, s_{n−1}). The 2p = 4 phase states corresponding to θn are

Θ = {0, π/2, π, 3π/2},

⁹ Read [1, Subsection 4.3.3].


and therefore the eight states Qn are

{(0, 1), (0, −1), (π/2, 1), (π/2, −1), (π, 1), (π, −1), (3π/2, 1), (3π/2, −1)}.

Having at our disposal the state (θn, s_{n−1}) and the transmitted symbol sn, we can find the new phase state as

(θn, s_{n−1}) →(sn) (θn + (π/2)s_{n−1}, sn) = (θ_{n+1}, sn).
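The state count and the transition rule can be checked in a few lines; a sketch storing θ in units of π/2 (an encoding chosen here for convenience, so θ ∈ {0, 1, 2, 3} stands for {0, π/2, π, 3π/2}):

```python
# States are (theta, s_prev) with theta in units of pi/2; the transition is
# (theta, s_prev) --s--> ((theta + s_prev) mod 4, s).
states = [(theta, s) for theta in range(4) for s in (+1, -1)]

def step(state, s):
    theta, s_prev = state
    return ((theta + s_prev) % 4, s)

transitions = {st: [step(st, s) for s in (+1, -1)] for st in states}
reachable = {nxt for outs in transitions.values() for nxt in outs}
```

Every state has two outgoing branches (sn = ±1) and the state set is closed under the transition, matching the eight-state trellis of Figure 19.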

The trellis diagram is depicted in Figure 19. In Figure 19, solid lines denote transitions corresponding to sn = 1, while dashed lines denote transitions corresponding to sn = −1.

Figure 19: Trellis diagram.

The state diagram is depicted in Figure 20.

Figure 20: State diagram.


11 Colored AGN Channel

1. Colored noise. Consider the following four equiprobable signals

s0(t) = (1/√π)cos(t), s1(t) = (1/√π)sin(t), s2(t) = −s0(t), s3(t) = −s1(t), 0 ≤ t ≤ 2π.

The received signal obeys r(t) = s(t) + n(t), where n(t) is a colored Gaussian noise with the following PSD

SN(ω) = (N0/2) · ω²/(1 + ω²), ω in [rad/sec];

the noise n(t) and the signal s(t) are independent.

(a) The optimal receiver for this scenario consists of a whitening filter, H(ω), followed by an optimal receiver for the AWGN channel. What should the whitening filter amplitude, |H(ω)|², be so that the noise at the filter output is white?

(b) Find the above H(ω) and the corresponding h(t), which can be composed of an adder and an integrator.

(c) For a noise-free channel, what are the transmitted signals, s̃i(t), i = 0, . . . , 3, at the output of the whitening filter?

(d) Let s̃i(t) ≜ si(t) ∗ h(t), i = 0, . . . , 3, where h(t) is the impulse response of H(ω). Find a set of real orthonormal basis functions which span the set S̃ = (s̃0(t), . . . , s̃3(t)). Find the projection of each element in the set S̃ on the basis functions.

(e) Sketch the optimal receiver.

Solution:

(a) The noise at the filter output will be white if

|H(ω)|² · (ω²/(1 + ω²)) · (N0/2) = constant.

(b) Let the constant be N0/2, hence |H(ω)|² = (1 + ω²)/ω². One of the filters which obeys |H(ω)|² = (1 + ω²)/ω² is

H0(ω) = (1 + jω)/(jω) = 1 + 1/(jω).

The impulse response of H0(ω) is h(t) = δ(t) + u(t) − 1/2, where u(t) denotes a step function. Hence

s̃i(t) = si(t) ∗ [δ(t) + u(t) − 1/2] = si(t) + ∫_{−∞}^{t} si(τ)dτ − (1/2)∫_{−∞}^{∞} si(τ)dτ = si(t) + ∫_{−∞}^{t} si(τ)dτ,

where the last step follows from the fact that

∫_{−∞}^{∞} si(τ)dτ = ∫_0^{2π} si(τ)dτ = 0, i = 0, 1, 2, 3.

Therefore H0(ω) can be implemented using an adder and an integrator.


(c)

s̃i(t) = si(t) + ∫_{−∞}^{t} si(τ)dτ = si(t) + ∫_0^{min{2π,t}} si(τ)dτ
      = { 0, t < 0;  si(t) + ∫_0^{t} si(τ)dτ, 0 ≤ t ≤ 2π;  ∫_0^{2π} si(τ)dτ, t > 2π }
      = { si(t) + ∫_0^{t} si(τ)dτ, t ∈ [0, 2π];  0, t ∉ [0, 2π]. }

Therefore the filtered signals are

s̃0(t) = (1/√π)(cos(t) + sin(t)), s̃1(t) = (1/√π)(sin(t) + 1 − cos(t)),
s̃2(t) = −s̃0(t), s̃3(t) = −s̃1(t), t ∈ [0, 2π].

(d) The functions φ0(t) = sin(t) + cos(t) and φ1(t) = sin(t) + 1 − cos(t) are orthogonal over [0, 2π], hence they establish a basis of the signal space. In order to have an orthonormal basis we should normalize φ0 and φ1:

‖φ0(t)‖² = ⟨sin(t) + cos(t), sin(t) + cos(t)⟩ = 2π,
‖φ1(t)‖² = ⟨sin(t) + 1 − cos(t), sin(t) + 1 − cos(t)⟩ = 4π.

Hence the orthonormal basis is

ϕ0(t) = (1/√(2π))(sin(t) + cos(t)), ϕ1(t) = (1/√(4π))(sin(t) + 1 − cos(t)).

The vectors of the whitened signals are

s̃0 = [√2, 0]ᵀ, s̃1 = [0, 2]ᵀ, s̃2 = −s̃0, s̃3 = −s̃1.
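The claimed norms (2π and 4π), the orthogonality, and the projections √2 and 2 can all be verified by numerical quadrature; a minimal sketch:

```python
import math

def inner(f, g, n=2000):
    """Midpoint-rule inner product <f, g> over [0, 2*pi]."""
    h = 2 * math.pi / n
    return sum(f((k + 0.5) * h) * g((k + 0.5) * h) for k in range(n)) * h

def phi0(t):
    return math.sin(t) + math.cos(t)

def phi1(t):
    return math.sin(t) + 1 - math.cos(t)

def s0_tilde(t):                        # whitened s0
    return (math.cos(t) + math.sin(t)) / math.sqrt(math.pi)

def s1_tilde(t):                        # whitened s1
    return (math.sin(t) + 1 - math.cos(t)) / math.sqrt(math.pi)

proj0 = inner(s0_tilde, lambda t: phi0(t) / math.sqrt(2 * math.pi))
proj1 = inner(s1_tilde, lambda t: phi1(t) / math.sqrt(4 * math.pi))
```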

(e) A type-II receiver is depicted in Figure 21.

Figure 21: Type-II receiver for ACGN.

where T = 2π.


The whitening filter can be integrated into the matched filters. Since cos(2π − t) = cos(t) and sin(2π − t) = −sin(t),

s̃0(2π − t) = { (1/√(2π))(cos(t) − sin(t)), 0 ≤ t ≤ 2π; 0, otherwise, }
s̃1(2π − t) = { (1/√π)(1 − cos(t) − sin(t)), 0 ≤ t ≤ 2π; 0, otherwise. }

Hence

(δ(t) + u(t)) ∗ s̃0(2π − t) = (1/√π)(2cos(t) − 1),
(δ(t) + u(t)) ∗ s̃1(2π − t) = (1/√π)(f(t) − 2sin(t)),

where the definition range of the functions cos, sin and 1 is 0 ≤ t ≤ 2π, and the function f(t) is defined as follows:

f(t) ≜ { 0, t < 0;  t, 0 ≤ t ≤ 2π;  2π, t > 2π. }

Figure 22 depicts an optimal receiver in which the whitening filter is integrated into the matched filters.

Figure 22: Type-II receiver for ACGN with the whitening filter integrated into the matched filters.

2. Final A, 2011. Consider the following two equiprobable signals

s0(t) = { √(E/T), 0 ≤ t ≤ T; 0, otherwise, }  s1(t) = { −√(E/T), 0 ≤ t ≤ T; 0, otherwise. }

The above signals are transmitted through an additive colored Gaussian noise (ACGN) channel with PSD SN(f). The noise PSD, SN(f), obeys

∫_{−∞}^{∞} |ln(SN(f))| / (1 + f²) df < ∞.


Prove that the probability of error of the optimal receiver is given by p(e) = Q(√(αE)), where

α ≜ (1/π) ∫_{−∞}^{∞} (sin(x)/x)² · (1/SN(x/(πT))) dx.

Solution:

Since the PSD of the noise satisfies the Paley-Wiener condition, a minimum-phase whitening filter H(f) such that |H(f)|² = (N0/2)·(1/SN(f)) exists. The optimal decoder then first whitens the noise and then performs MAP decoding on the modified signals s̃0(t) = s0(t) ∗ h(t) and s̃1(t) = s1(t) ∗ h(t), assuming T is large enough. Since the signals are antipodal and equiprobable, the probability of error is given by

p(e) = Pr{s0(t)} Pr{error|s0(t)} + Pr{s1(t)} Pr{error|s1(t)} = Pr{error|s0(t)}.

Next, note that after whitening we arrive at an AWGN channel with noise variance N0/2, so

Pr{error|s0(t)} = Q( dmin/√(2N0) ),

where dmin = ‖s̃1 − s̃0‖, and s̃0 and s̃1 are the projections of s̃0(t) and s̃1(t), respectively, on an appropriate orthonormal basis. From Parseval's theorem it follows that

dmin = ‖s̃1 − s̃0‖
     = √( ∫_{t=0}^{T} (s̃1(t) − s̃0(t))² dt )
     = √( ∫_{−∞}^{∞} |S̃1(f) − S̃0(f)|² df )
     = √( ∫_{−∞}^{∞} |H(f)S1(f) − H(f)S0(f)|² df )
     = √( ∫_{−∞}^{∞} |H(f)|² |S1(f) − S0(f)|² df ).

Note that

S0(f) = ∫_{−∞}^{∞} s0(t)e^{−j2πft} dt = √(E/T) ∫_0^{T} e^{−j2πft} dt = √(E/T) e^{−jπfT} sin(πfT)/(πf),


and S1(f) = −S0(f). Hence

dmin = √( ∫_{−∞}^{∞} |H(f)|² |S1(f) − S0(f)|² df )
     = √( ∫_{−∞}^{∞} (N0/2)·(1/SN(f)) |2S1(f)|² df )
     = √( ∫_{−∞}^{∞} 2N0 (1/SN(f)) |S1(f)|² df )
     = √(2N0E/T) · √( ∫_{−∞}^{∞} (1/SN(f)) (sin(πfT)/(πf))² df )
     [substituting x = πfT]
     = √(2N0E/T) · √( ∫_{−∞}^{∞} (1/SN(x/(πT))) (T sin(x)/x)² (1/(πT)) dx )
     = √(2N0E) · √( (1/π) ∫_{−∞}^{∞} (1/SN(x/(πT))) (sin(x)/x)² dx )
     = √(2N0E) · √α.

Finally we arrive at

Pr{error|s0(t)} = Q( dmin/√(2N0) ) = Q( √(2N0E)·√α / √(2N0) ) = Q( √(Eα) ).
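As a sanity check of the result: for white noise, SN(f) = N0/2, the integral gives α = 2/N0, so p(e) = Q(√(2E/N0)), the familiar antipodal-signaling result. The sketch below evaluates α numerically for that case (N0 = 1 is an arbitrary test value):

```python
import math

def alpha(SN, x_max=500.0, n=200000):
    """alpha = (1/pi) * integral of (sin x / x)^2 / SN(x/(pi*T)) dx (trapezoid).

    SN is called with the scaled argument directly, so T drops out for white
    noise. Truncation at x_max leaves an O(1/x_max) tail error."""
    h = 2 * x_max / n
    total = 0.0
    for k in range(n + 1):
        x = -x_max + k * h
        sinc2 = 1.0 if x == 0 else (math.sin(x) / x) ** 2
        w = 0.5 if k in (0, n) else 1.0
        total += w * sinc2 / SN(x)
    return total * h / math.pi

N0 = 1.0
a = alpha(lambda f: N0 / 2)   # white noise: expect alpha close to 2/N0
```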


12 ISI Channels and MLSE

1. [1, Problem 10.2]. In a binary PAM system, the clock that specifies the sampling of the correlator (matched filter) output is offset from the optimum sampling time by 10%.

(a) If the signal pulse used is rectangular, p(t) = { A, 0 ≤ t < T; 0, otherwise, } determine the loss in SNR of the desired signal component sampled at the output of the MF due to the mistiming.

(b) Determine the ISI coefficients, fk, due to the mistiming and determine their effect on the probability of error, assuming per-symbol decoding designed for binary PAM over AWGN (no ISI) with equal a-priori probabilities.

Solution:

(a) If the transmitted signal is:

r(t) = Σ_{n=−∞}^{∞} In p(t − nT) + n(t)

then the output of the receiving filter is

y(t) = Σ_{n=−∞}^{∞} In x(t − nT) + v(t)

where x(t) = p(t) ∗ p∗(−t) and v(t) = n(t) ∗ p∗(−t). If the sampling time is off by 10%, then the samples at the output of the correlator are taken at t = (m ± 1/10)T. Assuming that t = (m − 1/10)T without loss of generality, the sampled sequence is:

ym = Σ_{n=−∞}^{∞} In x((m − 1/10)T − nT) + v((m − 1/10)T).

If the signal pulse is rectangular with amplitude A and duration T, then Σ_{n=−∞}^{∞} In x((m − 1/10)T − nT) is nonzero only for n = m and n = m − 1 and therefore, the sampled sequence is given by:

ym = Im x(−(1/10)T) + I_{m−1} x(T − (1/10)T) + v((m − 1/10)T)
   = (9/10)A²T Im + (1/10)A²T I_{m−1} + v((m − 1/10)T).

The variance of the noise is:

σv² = (N0/2)A²T

and therefore, the SNR is:

SNR = (9/10)² · 2(A²T)²/(N0A²T) = (81/100) · 2A²T/N0.

As is observed, there is a loss of 10 log10(81/100) = −0.915 dB due to the mistiming.


(b) Recall from item (1a) that the sampled sequence is:

ym = (9/10)A²T Im + (1/10)A²T I_{m−1} + vm.

The term (1/10)A²T I_{m−1} expresses the ISI introduced to the system. If Im = 1 is transmitted, then the probability of error is

Pr{e|Im = 1} = (1/2)Pr{e|Im = 1, I_{m−1} = 1} + (1/2)Pr{e|Im = 1, I_{m−1} = −1}
 = (1/(2√(πN0A²T))) ∫_{−∞}^{−A²T} e^{−v²/(N0A²T)} dv + (1/(2√(πN0A²T))) ∫_{−∞}^{−(8/10)A²T} e^{−v²/(N0A²T)} dv
 = (1/2)Q( √(2A²T/N0) ) + (1/2)Q( √((8/10)² · 2A²T/N0) ).

Since the symbols of the binary PAM system are equiprobable, the previously derived expression is the probability of error when a symbol-by-symbol detector is employed. Comparing this with the probability of error of a system with no ISI, we observe that there is an increase of the probability of error by

(1/2)Q( √((8/10)² · 2A²T/N0) ) − (1/2)Q( √(2A²T/N0) ).
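Both the mistiming loss and the two error-probability terms are easy to evaluate; a sketch (the operating point A²T/N0 = 10 is an arbitrary choice for illustration):

```python
import math

def Q(x):
    """Gaussian tail function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

loss_db = 10 * math.log10((9 / 10) ** 2)   # SNR loss due to 10% mistiming

snr = 10.0                                 # hypothetical A^2 T / N0
p_no_isi = Q(math.sqrt(2 * snr))
p_mistimed = 0.5 * Q(math.sqrt(2 * snr)) + 0.5 * Q(0.8 * math.sqrt(2 * snr))
```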

2. [1, Problem 10.8]. A binary antipodal signal is transmitted over a nonideal band-limited channel, which introduces ISI over two adjacent symbols:

ym = Σ_k Ik x_{m−k} + vm = Im + (1/4)I_{m−1} + vm,

where vm is an additive noise.

(a) Determine the average probability of error, assuming equiprobable signals and that the additive noise is white and Gaussian, using a decoder designed for antipodal signals over AWGN (no ISI).

(b) By plotting the error probability obtained in (2a) and that for the case of no ISI, determine the relative difference in SNR at an error probability of 10⁻⁶.

Solution:

(a) The output of the matched filter at the time instant mT is:

ym = Σ_k Ik x_{m−k} + vm = Im + (1/4)I_{m−1} + vm.

The autocorrelation function of the noise samples vm is:

E{vk vj} = (N0/2) x_{k−j};

thus, the variance of the noise is

σv² = (N0/2) x0 = N0/2.


If a symbol-by-symbol detector is employed and we assume that the symbols Im = I_{m−1} = √εb have been transmitted, then the probability of error Pr{e|Im = I_{m−1} = √εb} is:

Pr{e|Im = I_{m−1} = √εb} = Pr{ym < 0|Im = I_{m−1} = √εb} = Pr{vm < −(5/4)√εb} = Q( (5/4)√(2εb/N0) ).

If however I_{m−1} = −√εb, then:

Pr{e|Im = √εb, I_{m−1} = −√εb} = Pr{vm < −(3/4)√εb} = Q( (3/4)√(2εb/N0) ).

Since the symbols are equiprobable, we conclude that:

p(e) = (1/2)Q( (5/4)√(2εb/N0) ) + (1/2)Q( (3/4)√(2εb/N0) ).

(b) Figure 23 depicts the error probability obtained in item (2a) vs. the SNR per bit, together with the error probability for the case of no ISI. As observed, the relative difference in SNR at an error probability of 10⁻⁶ is about 2 dB.

Figure 23: Probability of error comparison.
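The ≈2 dB gap read off the figure can be reproduced by numerically inverting both error-probability curves; a sketch using bisection on the SNR per bit:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def snr_db_for(pe_func, target=1e-6, lo=0.0, hi=30.0):
    """Bisect on SNR/bit [dB] until pe_func(eb_over_n0) reaches the target."""
    for _ in range(200):
        mid = (lo + hi) / 2
        eb_n0 = 10 ** (mid / 10)
        lo, hi = (mid, hi) if pe_func(eb_n0) > target else (lo, mid)
    return (lo + hi) / 2

def p_isi(g):
    return 0.5 * Q(1.25 * math.sqrt(2 * g)) + 0.5 * Q(0.75 * math.sqrt(2 * g))

def p_ref(g):
    return Q(math.sqrt(2 * g))

gap_db = snr_db_for(p_isi) - snr_db_for(p_ref)
```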

3. [1, Problem 10.24]. Consider a four-level PAM system with possible transmitted levels 3, 1, −1, and −3. The channel through which the data are transmitted introduces intersymbol interference over two successive symbols. The equivalent discrete-time channel model obeys

yk = { 0.8Ik + nk, k = 1;  0.8Ik − 0.6I_{k−1} + nk, k > 1, }

where {nk} is a sequence of real-valued independent zero-mean Gaussian noise variables with variance σ² = N0.


(a) Sketch the tree structure, showing the possible signal sequences for the received signals y1, y2, and y3.

(b) Suppose the Viterbi algorithm is used to detect the information sequence. How many metrics must be computed at each stage of the algorithm?

(c) How many surviving sequences are there in the Viterbi algorithm for this channel?

(d) Suppose that the received signals are

y1 = 0.5, y2 = 2.0, y3 = −1.0

Determine the surviving sequences through stage y3 and the corresponding metrics.

Solution:

(a) Figure 24 depicts part of the tree.


Figure 24: Tree structure.

(b) There are four states in the trellis (corresponding to the four possible values of the symbol I_{k−1}), and for each one there are four paths starting from it (corresponding to the four possible values of the symbol Ik). Hence, 16 metrics must be computed at each stage of the Viterbi algorithm.

(c) Since there are four states, the number of surviving sequences is also four.

(d) The metrics are

µk = { (y1 − 0.8I1)², k = 1;  Σ_k (yk − 0.8Ik + 0.6I_{k−1})², k > 1. }


Table 1 details the metric for the first stage.

I1 | µ1
3 | 3.61
1 | 0.09
−1 | 1.69
−3 | 8.41

Table 1: First stage metric.

Table 2 details the metric for the second stage.

I2 | I1 | µ2(I2, I1)
3 | 3 | 5.57
3 | 1 | 0.13
3 | −1 | 6.53
3 | −3 | 13.25
1 | 3 | 12.61
1 | 1 | 3.33
1 | −1 | 2.05
1 | −3 | 8.77
−1 | 3 | 24.77
−1 | 1 | 11.65
−1 | −1 | 6.53
−1 | −3 | 9.41
−3 | 3 | 42.05
−3 | 1 | 25.09
−3 | −1 | 16.13
−3 | −3 | 15.17

Table 2: Second stage metric.

The four surviving paths at this stage are min_{I1} {µ2(x, I1)}, x = 3, 1, −1, −3:

(I2, I1) = (3, 1): µ2(3, 1) = 0.13
(I2, I1) = (1, −1): µ2(1, −1) = 2.05
(I2, I1) = (−1, −1): µ2(−1, −1) = 6.53
(I2, I1) = (−3, −3): µ2(−3, −3) = 15.17

Table 3 details the metric for the third stage.

The four surviving paths at this stage are min_{I2,I1} {µ3(x, I2, I1)}, x = 3, 1, −1, −3:

(I3, I2, I1) = (3, 3, 1): µ3(3, 3, 1) = 2.69
(I3, I2, I1) = (1, 3, 1): µ3(1, 3, 1) = 0.13
(I3, I2, I1) = (−1, 3, 1): µ3(−1, 3, 1) = 2.69
(I3, I2, I1) = (−3, 1, −1): µ3(−3, 1, −1) = 2.69


I3 | I2 | I1 | µ3(I3, I2, I1)
3 | 3 | 1 | 2.69
3 | 1 | −1 | 9.89
3 | −1 | −1 | 22.53
3 | −3 | −3 | 42.21
1 | 3 | 1 | 0.13
1 | 1 | −1 | 7.81
1 | −1 | −1 | 12.29
1 | −3 | −3 | 28.13
−1 | 3 | 1 | 2.69
−1 | 1 | −1 | 2.69
−1 | −1 | −1 | 7.17
−1 | −3 | −3 | 19.17
−3 | 3 | 1 | 10.37
−3 | 1 | −1 | 2.69
−3 | −1 | −1 | 7.17
−3 | −3 | −3 | 15.33

Table 3: Third stage metric.
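Since the example is tiny, a brute-force search over all 4³ = 64 sequences confirms the minimum-metric path that the Viterbi algorithm finds:

```python
from itertools import product

y = [0.5, 2.0, -1.0]
levels = (3, 1, -1, -3)

def metric(seq):
    """Path metric: (y1 - 0.8 I1)^2 + sum over k>1 of (yk - 0.8 Ik + 0.6 I_{k-1})^2."""
    m = (y[0] - 0.8 * seq[0]) ** 2
    for k in range(1, len(seq)):
        m += (y[k] - 0.8 * seq[k] + 0.6 * seq[k - 1]) ** 2
    return m

best = min(product(levels, repeat=3), key=metric)   # (I1, I2, I3)
```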

4. [4, Problem 8.2]. In a binary equiprobable PAM system the input to the detector is

rn = sn + wn + bn,

where sn = ±1 is the desired signal, wn is a zero-mean Gaussian random variable with variance σw², and bn represents the ISI due to the channel distortion. The ISI term is a random variable which takes the values −1/2, 0, 1/2 with probabilities 1/4, 1/2, 1/4, respectively. Determine the average probability of error, p(e), of an optimal symbol-by-symbol detector as a function of σw².

Solution:

An optimal symbol-by-symbol detector for binary PAM compares the received signal, rn, to zero. The received signal (detector input) is

rn = { sn + wn − 1/2, w.p. 1/4;  sn + wn + 1/2, w.p. 1/4;  sn + wn, w.p. 1/2. }

By symmetry p(e) = Pr{error|s = 1} = Pr{error|s = −1}, hence,

p(e) = Pr{error|s = −1}
     = (1/2)Pr{w − 1 > 0} + (1/4)Pr{w − 3/2 > 0} + (1/4)Pr{w − 1/2 > 0}
     = (1/2)Pr{w > 1} + (1/4)Pr{w > 3/2} + (1/4)Pr{w > 1/2}
     = (1/2)Q(1/σw) + (1/4)Q(3/(2σw)) + (1/4)Q(1/(2σw)).
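A quick Monte Carlo cross-check of the closed form (σw = 0.5 is an arbitrary test point):

```python
import math
import random

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_closed(sigma):
    return 0.5 * Q(1 / sigma) + 0.25 * Q(1.5 / sigma) + 0.25 * Q(0.5 / sigma)

def pe_monte_carlo(sigma, n=200000, seed=0):
    rng = random.Random(seed)
    errors = 0
    for _ in range(n):
        b = rng.choice([-0.5, 0.0, 0.0, 0.5])   # ISI: -1/2, 0, 1/2 w.p. 1/4, 1/2, 1/4
        r = -1 + rng.gauss(0, sigma) + b        # transmit s = -1
        errors += r > 0                         # detector threshold at zero
    return errors / n
```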


13 Equalization

1. [3, Problem 11.13]. This problem illustrates the noise enhancement of zero-forcing equalizers, and how this enhancement can be mitigated using an MMSE approach. Consider a frequency-selective fading channel with baseband frequency response

H(f) = { 1, 0 ≤ |f| < 10 kHz;  1/2, 10 kHz ≤ |f| < 20 kHz;  1/3, 20 kHz ≤ |f| < 30 kHz;  1/4, 30 kHz ≤ |f| < 40 kHz;  1/5, 40 kHz ≤ |f| < 50 kHz;  0, otherwise. }

The frequency response is symmetric in positive and negative frequencies. Assume an AWGN channel with noise PSD N0 = 10⁻⁹ W/Hz.

(a) Find a ZF analog equalizer that completely removes the ISI introduced by H(f).

(b) Find the total noise power at the output of the equalizer from item (1a).

(c) Assume an MMSE analog equalizer of the form Heq(f) = 1/(H(f) + α). Find the total noise power at the output of this equalizer for an AWGN input with PSD N0, for α = 0.5 and for α = 1.

(d) Describe qualitatively two effects on a signal that is transmitted over the channel H(f) and then passed through the MMSE equalizer Heq(f) = 1/(H(f) + α) with α > 0. What design considerations should go into the choice of α?

(e) What happens to the total noise power for the MMSE equalizer in item (1c) as α → ∞? What is the disadvantage of letting α → ∞ in this equalizer design?

Solution:

(a)

Hzf(f) = 1/H(f) = { 1, 0 ≤ |f| < 10 kHz;  2, 10 kHz ≤ |f| < 20 kHz;  3, 20 kHz ≤ |f| < 30 kHz;  4, 30 kHz ≤ |f| < 40 kHz;  5, 40 kHz ≤ |f| < 50 kHz. }

(b) The noise spectrum at the output of the filter is given by N(f) = N0|Heq(f)|², and the noise power is given by the integral of N(f) from −50 kHz to 50 kHz:

N = ∫_{−50 kHz}^{50 kHz} N(f) df = 2N0 ∫_0^{50 kHz} |Heq(f)|² df = 2N0(1 + 4 + 9 + 16 + 25)(10 kHz) = 1.1 mW.

(c) The noise spectrum at the output of the filter is given by N(f) = N0/(H(f) + α)², and the noise power is given by the integral of N(f) from −50 kHz to 50 kHz. For α = 0.5 we get

N = 2N0(0.44 + 1 + 1.44 + 1.78 + 2.04)(10 kHz) = 0.134 mW.

For α = 1 we get

N = 2N0(0.25 + 0.44 + 0.56 + 0.64 + 0.69)(10 kHz) = 0.0518 mW.
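The three noise powers follow from summing 1/(H(f)+α)² over the five 10 kHz sub-bands; a short check (the entries 0.44, …, 2.04 above are these terms rounded):

```python
H = [1.0, 1/2, 1/3, 1/4, 1/5]        # channel gain in each 10 kHz sub-band
N0, band = 1e-9, 10e3                # noise PSD [W/Hz], sub-band width [Hz]

def noise_power(alpha):
    """Output noise power 2*N0*band*sum 1/(H+alpha)^2 (factor 2: +/- frequencies)."""
    return 2 * N0 * band * sum(1 / (h + alpha) ** 2 for h in H)

n_zf = noise_power(0.0)              # zero-forcing: 1/H^2
n_mmse_05 = noise_power(0.5)
n_mmse_1 = noise_power(1.0)
```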


(d) As α increases, the frequency response Heq(f) decreases for all f. Thus, the noise power decreases, but the signal power decreases as well. The factor α should be chosen to balance maximizing the SNR and minimizing distortion, which also depends on the spectrum of the input signal (which is not given here).

(e) As α → ∞, the noise power goes to 0 because Heq(f) → 0 for all f. However, the signal power also goes to zero.

2. [1, Problem 10.10]. Binary PAM is used to transmit information over an unequalized linear filter channel. When a = 1 is transmitted, the noise-free output of the demodulator is

xm = { 0.3, m = 1;  0.9, m = 0;  0.3, m = −1;  0, otherwise. }

(a) Design a three-tap zero-forcing linear equalizer so that the output is

qm = { 1, m = 0;  0, m = ±1. }

Remark 4. The equalizer does not have to be causal.

(b) Determine qm for m = ±2, ±3, by convolving the impulse response of the equalizer with the channel response.

Solution:

(a) If by {cn} we denote the coefficients of the FIR equalizer, then the equalized signal is:

qm = Σ_{n=−1}^{1} cn x_{m−n},

which in matrix notation is written as

[ 0.9 0.3 0 ; 0.3 0.9 0.3 ; 0 0.3 0.9 ] [ c−1 ; c0 ; c1 ] = [ 0 ; 1 ; 0 ].

The coefficients of the zero-forcing equalizer can be found by solving the above matrix equation. Thus

[ c−1 ; c0 ; c1 ] = [ −0.4762 ; 1.4286 ; −0.4762 ].


(b) The values of qm for m = ±2, ±3 are given by

q2 = Σ_{n=−1}^{1} cn x_{2−n} = c1 x1 = −0.1429
q−2 = Σ_{n=−1}^{1} cn x_{−2−n} = c−1 x−1 = −0.1429
q3 = Σ_{n=−1}^{1} cn x_{3−n} = 0
q−3 = Σ_{n=−1}^{1} cn x_{−3−n} = 0.
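Both parts can be verified by solving the 3×3 system directly; a sketch with a small Gaussian-elimination helper (written here only for self-containment):

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

A = [[0.9, 0.3, 0.0], [0.3, 0.9, 0.3], [0.0, 0.3, 0.9]]
c = solve3(A, [0.0, 1.0, 0.0])       # [c_-1, c_0, c_1]
x_taps = {-1: 0.3, 0: 0.9, 1: 0.3}   # channel taps
q2 = c[2] * x_taps[1]                # q_2 = c_1 * x_1
```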

3. [1, Problem 10.15]. Repeat problem (2) using the MMSE as the criterion for optimizing the tap coefficients. Assume that the noise PSD is 0.1 W/Hz.

Solution:

A discrete-time transversal filter equivalent to the cascade of the transmitting filter gT(t), the channel c(t), the matched filter at the receiver gR(t) and the sampler has tap gain coefficients {xm}, where

xm = { 0.9, m = 0;  0.3, m = ±1;  0, otherwise. }

The noise νk at the output of the sampler is a zero-mean Gaussian sequence with autocorrelation function:

E{νk νl} = σ² x_{k−l}, |k − l| ≤ 1.

If the Z-transform of the sequence {xm}, X(z), assumes the factorization

X(z) = F(z)F∗(1/z∗),

then the filter 1/F∗(1/z∗) can follow the sampler to whiten the noise sequence νk. In this case the output of the whitening filter, and input to the MSE equalizer, is the sequence

un = Σ_k Ik f_{n−k} + nk,

where nk is zero-mean white Gaussian with variance σ². The optimum coefficients of the MSE equalizer, cn, satisfy:

Σ_{n=−1}^{1} cn Γ_{k−n} = ξk, k = −1, 0, 1,

where

Γ_{k−n} = { x_{k−n} + σ²δ_{n,k}, |k − n| ≤ 1;  0, otherwise, }
ξk = { f_{−k}, −1 ≤ k ≤ 0;  0, otherwise. }


With

X(z) = 0.3z + 0.9 + 0.3z⁻¹ = (f0 + f1 z⁻¹)(f0∗ + f1∗ z)

we obtain the parameters f0 and f1 as:

f0 = ±√0.7854 or ±√0.1146,  f1 = ±√0.1146 or ±√0.7854.

The parameters f0 and f1 should have the same sign since f0 f1 = 0.3. To have a stable inverse system 1/F∗(1/z∗), we select f0 and f1 in such a way that the zero of the system F∗(1/z∗) = f0∗ + f1∗ z is inside the unit circle. Thus, we choose f0 = √0.1146 and f1 = √0.7854 and therefore, the desired system for the equalizer's coefficients is:

[ 0.9 + 0.1  0.3  0 ; 0.3  0.9 + 0.1  0.3 ; 0  0.3  0.9 + 0.1 ] [ c−1 ; c0 ; c1 ] = [ √0.7854 ; √0.1146 ; 0 ].

Solving this system, we obtain

c−1 = 0.8596, c0 = 0.0886, c1 = −0.0266.
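The reported coefficients can be verified by solving the 3×3 system; Cramer's rule is enough at this size:

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(A, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    d = det3(A)
    sol = []
    for j in range(3):
        Aj = [[b[i] if k == j else A[i][k] for k in range(3)] for i in range(3)]
        sol.append(det3(Aj) / d)
    return sol

A = [[1.0, 0.3, 0.0], [0.3, 1.0, 0.3], [0.0, 0.3, 1.0]]   # x_{k-n} + sigma^2 I
b = [math.sqrt(0.7854), math.sqrt(0.1146), 0.0]           # [f1, f0, 0]
c_m1, c0, c1 = cramer3(A, b)
```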

4. [1, Problem 10.21].¹⁰ Consider the following channel

yn = (1/√2)In + (1/√2)I_{n−1} + vn,

where {vn} is a real-valued white Gaussian noise sequence with zero mean and variance N0. Suppose the channel is to be equalized by a DFE having a two-tap feedforward filter (c0, c−1) and a one-tap feedback filter (c1). The {ci} are optimized using the MSE criterion.

(a) Determine exactly the optimum coefficients as a function of N0 and approximate their values for N0 ≪ 1.

(b) Determine the exact value of the minimum MSE and find a first-order approximation (in terms of N0) appropriate to the case N0 ≪ 1. Assume E{In²} = 1.

(c) Determine the exact value of the output SNR for the three-tap equalizer as a function of N0 and find a first-order approximation appropriate to the case N0 ≪ 1.

(d) Compare the results in items (4b) and (4c) with the performance of the infinite-tap DFE.

(e) Evaluate and compare the exact values of the output SNR for the three-tap and infinite-tap DFE in the special cases where N0 = 0.1 and N0 = 0.01. Comment on how well the three-tap equalizer performs relative to the infinite-tap equalizer.

Solution:

(a) The tap coefficients of the feedforward filter are given by the following equations:

Σ_{j=−K1}^{0} cj ψlj = f∗_{−l}, −K1 ≤ l ≤ 0,

¹⁰ Read [1, Sub-section 10.3.2] and [1, Example 10.3.1].


where

ψlj = Σ_{m=0}^{−l} f∗m f_{m+l−j} + N0 δlj, −K1 ≤ l, j ≤ 0.

The tap coefficients of the feedback filter of the DFE are given in terms of the coefficients of the feedforward section by the following equations:

ck = −Σ_{j=−K1}^{0} cj f_{k−j}, 1 ≤ k ≤ K2.

In this case, K1 = 1, resulting in the following two equations:

ψ0,0 c0 + ψ0,−1 c−1 = f0∗
ψ−1,0 c0 + ψ−1,−1 c−1 = f1∗.

From the definition of ψlj the above system can be written as:

[ 1/2 + N0  1/2 ; 1/2  1/2 + N0 ] [ c0 ; c−1 ] = [ 1/√2 ; 1/√2 ]

so:

[ c0 ; c−1 ] = 1/(√2(N0² + (3/2)N0 + 1/4)) · [ 1/2 + N0 ; N0 ] ≈ [ √2 ; 2√2 N0 ], for N0 ≪ 1.

The coefficient for the feedback section is:

c1 = −c0 f1 = −(1/√2)c0 ≈ −1, for N0 ≪ 1.

(b)

Jmin(1) = 1 − Σ_{j=−K1}^{0} cj f_{−j} = (2N0² + N0)/(2(N0² + (3/2)N0 + 1/4)) ≈ 2N0, for N0 ≪ 1.

(c)

γ = (1 − Jmin(1))/Jmin(1) = (1 + 4N0)/(2N0(1 + 2N0)) ≈ 1/(2N0), for N0 ≪ 1.

(d) For the infinite-tap DFE, we have from [1, Example 10.3.1]:

Jmin = 2N0/(1 + N0 + √((1 + N0)² − 1)) ≈ 2N0, for N0 ≪ 1,

γ∞ = (1 − Jmin)/Jmin = (1 − N0 + √((1 + N0)² − 1))/(2N0).

(e) For N0 = 0.1 we have:

Jmin(1) = 0.146, γ = 5.83 (7.66 dB)

Jmin = 0.128, γ∞ = 6.8 (8.32 dB)

For N0 = 0.01 we have:

Jmin(1) = 0.0193, γ = 51 (17.1 dB)

Jmin = 0.0174, γ∞ = 56.6 (17.5 dB)

The three-tap equalizer performs very well compared to the infinite-tap equalizer. The difference in performance is 0.6 dB for N0 = 0.1 and 0.4 dB for N0 = 0.01.
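The numbers in item (4e) follow from the closed-form expressions; a short check for N0 = 0.1:

```python
import math

def three_tap(N0):
    """Jmin(1) and output SNR gamma for the two-feedforward-tap DFE."""
    jmin1 = (2 * N0**2 + N0) / (2 * (N0**2 + 1.5 * N0 + 0.25))
    return jmin1, (1 - jmin1) / jmin1

def infinite_tap(N0):
    """Jmin and gamma_inf for the infinite-tap DFE of [1, Example 10.3.1]."""
    jmin = 2 * N0 / (1 + N0 + math.sqrt((1 + N0) ** 2 - 1))
    return jmin, (1 - jmin) / jmin

j1, g1 = three_tap(0.1)
ji, gi = infinite_tap(0.1)
gap_db = 10 * math.log10(gi / g1)     # performance gap in dB
```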


5. Final C, 2011.
Let $y_n$ be the signal at the output of an ISI channel

\[ y_n = \sum_{i=-\infty}^{\infty} x_i s_{n-i} + v_n, \]

where $\{s_n\}_{n=-\infty}^{\infty}$ is the transmitted symbol sequence. The symbols are selected in an i.i.d manner from a constellation $\mathcal{A}$ with average energy $E_\mathcal{A}$. $\{v_n\}_{n=-\infty}^{\infty}$ is a zero-mean complex white Gaussian noise with variance $N_0$. $\{x_i\}_{i=-\infty}^{\infty}$ is the channel coefficients vector. Figure 25 depicts the frequency response magnitude of $\{x_i\}_{i=-\infty}^{\infty}$.

Figure 25: Frequency response magnitude of $\{x_i\}_{i=-\infty}^{\infty}$.

The signal $\{y_n\}_{n=-\infty}^{\infty}$ is filtered by a zero-forcing equalizer. Find the SNR per symbol at the equalizer output as a function of $E_\mathcal{A}$ and $N_0$.

Solution:

The z-transform of the received signal is

\[ Y(z) = S(z)X(z) + V(z). \]

The frequency response depicted in Figure 25 indicates that $X(z)$ is invertible. Thus, after filtering with $\frac{1}{X(z)}$ we obtain

\[ \tilde{Y}(z) = S(z) + V(z)\frac{1}{X(z)} = S(z) + \tilde{V}(z), \]

and the PSD of the filtered noise is

\[ S_{\tilde{V}}\!\left(e^{j\omega}\right) = \frac{N_0}{\left|X\left(e^{j\omega}\right)\right|^2}. \]

Hence, the energy of the noise is

\[ \sigma_v^2 = \frac{1}{2\pi}\int_{-\pi}^{\pi} S_{\tilde{V}}\!\left(e^{j\omega}\right)d\omega = \frac{1}{\pi}\int_{0}^{\pi} \frac{N_0}{\left|X\left(e^{j\omega}\right)\right|^2}\,d\omega = N_0\,\frac{9}{12}. \]

Therefore, the SNR per symbol is $\frac{E_\mathcal{A}}{\sigma_v^2} = \frac{12}{9}\,\frac{E_\mathcal{A}}{N_0}$.
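Figure 25 is not reproduced here, so as an illustration the sketch below assumes a hypothetical piecewise-constant magnitude response, $|X(e^{j\omega})|^2 = 2$ for $|\omega| \le \pi/2$ and $1$ otherwise, chosen only because it yields exactly the $9/12$ factor of this problem. It evaluates the noise-enhancement integral of the zero-forcing equalizer numerically:

```python
import numpy as np

def zf_output_snr(mag2, EA, N0, n=200_000):
    """Evaluate sigma_v^2 = (1/pi) * integral_0^pi N0/|X(e^jw)|^2 dw by a
    midpoint Riemann sum and return the output SNR EA/sigma_v^2."""
    w = (np.arange(n) + 0.5) * np.pi / n        # midpoint grid on (0, pi)
    sigma_v2 = N0 * np.mean(1.0 / mag2(w))      # (1/pi)*integral = grid mean
    return EA / sigma_v2

# Hypothetical stand-in for Figure 25 (assumption, not the actual figure):
mag2 = lambda w: np.where(w <= np.pi/2, 2.0, 1.0)

print(zf_output_snr(mag2, EA=1.0, N0=0.1))      # close to (12/9)*EA/N0
```

Any other magnitude response can be substituted for `mag2`; the same routine then gives the corresponding ZF output SNR.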


14 Non-Coherent Reception

1. Minimal frequency difference for orthogonality.

(a) Consider the signals

\[ s_i(t) = \begin{cases} \sqrt{\frac{2E}{T}}\cos(2\pi f_i t), & 0 \le t \le T, \\ 0, & \text{otherwise}, \end{cases} \qquad i = 0, 1. \]

Both frequencies obey $f_i T \gg 1$, $i = 0, 1$. What is the minimal frequency difference, $|f_0 - f_1|$, required for the two signals, $s_0(t)$ and $s_1(t)$, to be orthogonal?

(b) Now an unknown phase is added to one of the signals:

\[ s_0(t) = \begin{cases} \sqrt{\frac{2E}{T}}\cos(2\pi f_0 t), & 0 \le t \le T, \\ 0, & \text{otherwise}, \end{cases} \qquad s_1(t) = \begin{cases} \sqrt{\frac{2E}{T}}\cos(2\pi f_1 t + \phi), & 0 \le t \le T, \\ 0, & \text{otherwise}. \end{cases} \]

Find the minimal frequency difference required for the two signals to be orthogonal, for an unknown $\phi$.

Solution:

We first solve for the general case, and then assign φ = 0 for item 1a.

\begin{align*}
\langle s_0(t), s_1(t)\rangle &= \frac{2E}{T}\int_0^T \cos(2\pi f_0 t)\cos(2\pi f_1 t + \phi)\,dt \\
&= \frac{1}{2}\cdot\frac{2E}{T}\int_0^T \Big(\cos\big(2\pi(f_0+f_1)t + \phi\big) + \cos\big(2\pi(f_0-f_1)t - \phi\big)\Big)\,dt \\
&= E\cdot\Bigg[\underbrace{\frac{\sin\big(2\pi(f_0+f_1)t + \phi\big)}{2\pi(f_0+f_1)T}}_{\approx\,0 \text{ because } f_iT \gg 1} + \frac{\sin\big(2\pi(f_0-f_1)t - \phi\big)}{2\pi(f_0-f_1)T}\Bigg]_0^T \\
&\approx E\cdot\frac{\sin\big(2\pi(f_0-f_1)t - \phi\big)}{2\pi(f_0-f_1)T}\Bigg|_0^T = \underbrace{0}_{\text{demand}}
\end{align*}

We now consider the special cases.

(a) For $\phi = 0$:

\[ \langle s_0(t), s_1(t)\rangle = 0 \;\Rightarrow\; \sin\big(2\pi(f_0-f_1)T\big) = 0 \;\Rightarrow\; 2\pi(f_0-f_1)T = n\pi, \]

where $n$ is an integer, hence

\[ |f_0 - f_1|_{\min} = \frac{1}{2T}. \]

(b) For unknown $\phi$:

\begin{align*}
\langle s_0(t), s_1(t)\rangle = 0
&\;\Rightarrow\; \sin\big(2\pi(f_0-f_1)t - \phi\big)\Big|_0^T = 0 \\
&\;\Rightarrow\; \sin\big(2\pi(f_0-f_1)T - \phi\big) - \sin(-\phi) = 0 \\
&\;\Rightarrow\; \big(2\pi(f_0-f_1)T - \phi\big) - (-\phi) = n\cdot 2\pi,
\end{align*}

where the last step follows from the demand that the result be zero for any $\phi$; hence we require that the difference between $\big(2\pi(f_0-f_1)T - \phi\big)$ and $(-\phi)$ equal $n\cdot 2\pi$, where $n$ is an integer.
Hence, the minimal frequency difference for the non-coherent scenario is

\[ |f_0 - f_1|_{\min} = \frac{1}{T}. \]

We conclude that the non-coherent scenario requires double the bandwidth compared with the coherent scenario.
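Both spacing results are easy to confirm numerically. The sketch below (Python; $E = T = 1$ and $f_0 = 50$ are arbitrary choices satisfying $f_i T \gg 1$) evaluates the inner product by a midpoint Riemann sum:

```python
import numpy as np

def inner_product(df, phi, f0=50.0, T=1.0, n=400_000):
    """<s0, s1> for tone spacing df and phase phi (E = 1 assumed)."""
    t = (np.arange(n) + 0.5) * (T / n)          # midpoint grid on (0, T)
    s0 = np.sqrt(2/T) * np.cos(2*np.pi*f0*t)
    s1 = np.sqrt(2/T) * np.cos(2*np.pi*(f0 + df)*t + phi)
    return np.sum(s0 * s1) * (T / n)

print(inner_product(0.5, 0.0))        # df = 1/(2T), phi = 0: ~0 (orthogonal)
print(inner_product(0.5, np.pi/2))    # df = 1/(2T), phi != 0: far from 0
print(inner_product(1.0, 1.234))      # df = 1/T: ~0 for any phi
```

The spacing $1/(2T)$ gives orthogonality only at $\phi = 0$, while $1/T$ gives orthogonality for every phase, matching the conclusion above.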

2. Non-coherent receiver for M orthogonal signals.
Consider the following $M$ orthogonal signals

\[ s_i(t) = \sqrt{\frac{2E}{T}}\sin(\omega_i t), \quad 0 \le t \le T, \quad i = 0, 1, \ldots, M-1. \]

The received signal is

\[ r(t) = \sqrt{\frac{2E}{T}}\sin(\omega_i t + \phi) + n(t), \]

where $\phi \sim U[0, 2\pi)$ and $n(t)$ is white Gaussian noise with PSD $\frac{N_0}{2}$.

The set $\{r_{s,i}, r_{c,i}\}_{i=0}^{M-1}$ is a sufficient statistic for decoding $r(t)$, where

\[ r_{c,i} = \int_0^T r(t)\sqrt{\frac{2}{T}}\cos(\omega_i t)\,dt, \qquad r_{s,i} = \int_0^T r(t)\sqrt{\frac{2}{T}}\sin(\omega_i t)\,dt. \]

In class it was obtained that the optimal receiver for equiprobable a-priori probabilities finds the maximal $r_i^2 = r_{c,i}^2 + r_{s,i}^2$, and chooses the respective $s_i(t)$.

The probability density functions (PDFs) of $r_0$ and $r_i$, $i = 1, \ldots, M-1$, given that $s_0(t)$ was transmitted, are:

\[ f(r_0|s_0) = \frac{2r_0}{N_0}\, e^{-\frac{r_0^2}{N_0}}\, e^{-\frac{E}{N_0}}\, I_0\!\left(\frac{2\sqrt{E}}{N_0}\,r_0\right), \quad r_0 \ge 0, \]

\[ f(r_i|s_0) = \frac{2r_i}{N_0}\, e^{-\frac{r_i^2}{N_0}}, \quad r_i \ge 0, \quad i = 1, \ldots, M-1. \]

For equiprobable a-priori probabilities and $M = 2$, the error probability of the optimal receiver is

\[ p(e) = \frac{1}{2}\,e^{-\frac{E}{2N_0}}. \]

Show that for equiprobable a-priori probabilities and general $M$, the error probability of the optimal receiver is

\[ p(e) = \sum_{i=1}^{M-1} \binom{M-1}{i}\,(-1)^{i+1}\,\frac{1}{i+1}\,e^{-\frac{i}{i+1}\frac{E}{N_0}}. \]

Guideline: Let $A$, $B$ and $C$ be i.i.d RVs with PDF $f_Y(y)$. Let $X = \max\{A, B, C\}$. Derive the PDF $f_X(x)$.

Solution:


Due to symmetry,

\[ p(e) = \sum_{i=0}^{M-1} p(e|s_i)\,p(s_i) = p(e|s_0). \]

The probability of error given that $s_0(t)$ was transmitted obeys

\[ p(e|s_0) = \Pr\{r_{\max} = \max\{r_1, \ldots, r_{M-1}\} > r_0 \mid s_0\}. \]

Note: the ri, i = 1, . . . ,M − 1 are i.i.d.

For i.i.d random variables $y_1, \ldots, y_n$ with PDF $f_Y(y)$ and CDF $F_Y(y)$, the CDF of $y_{\max} = \max\{y_1, \ldots, y_n\}$ obeys

\[ F_{Y_{\max}}(y) = \Pr\{y_{\max} \le y\} = \Pr\{y_1 \le y, \ldots, y_n \le y\} \overset{(a)}{=} \big(F_Y(y)\big)^n \;\Rightarrow\; f_{Y_{\max}}(y) = n\,\big(F_Y(y)\big)^{n-1} f_Y(y), \]

where (a) follows from the fact that the random variables are i.i.d.
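The max-of-i.i.d. identity is easy to check by simulation. A minimal sketch (Python; Exp(1) variables and $n = 3$ are arbitrary choices matching the guideline) compares the empirical CDF of the maximum with $(F_Y(y))^n$:

```python
import numpy as np

# Verify F_max(x) = (F_Y(x))^n for the max of n i.i.d. RVs,
# using Exp(1) as an example distribution (F_Y(x) = 1 - exp(-x)).
rng = np.random.default_rng(0)
n, trials = 3, 200_000
y = rng.exponential(scale=1.0, size=(trials, n))
ymax = y.max(axis=1)

x = 1.5
empirical = np.mean(ymax <= x)     # Pr{max <= x} estimated by simulation
analytic = (1 - np.exp(-x))**n     # (F_Y(x))^n
print(empirical, analytic)
```

The two numbers agree to within Monte Carlo noise, which is exactly the step used next to obtain $f(r_{\max}|s_0)$.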

In order to find $f(r_{\max}|s_0)$ we need to find $F(r_i|s_0)$:

\[ F(r_i|s_0) = \int_0^{r_i} \frac{2t}{N_0}\, e^{-\frac{t^2}{N_0}}\,dt = 1 - e^{-\frac{r_i^2}{N_0}}. \]

Hence

\[ f(r_{\max}|s_0) = (M-1)\left(1 - e^{-\frac{r_{\max}^2}{N_0}}\right)^{M-2} \cdot \frac{2r_{\max}}{N_0}\, e^{-\frac{r_{\max}^2}{N_0}}. \]

$f(r_{\max}|s_0)$ can be expanded as follows:

\begin{align*}
f(r_{\max}|s_0) &= (M-1)\sum_{j=0}^{M-2}\left(-e^{-\frac{r_{\max}^2}{N_0}}\right)^{j}\binom{M-2}{j}\cdot\frac{2r_{\max}}{N_0}\, e^{-\frac{r_{\max}^2}{N_0}} \\
&= \sum_{j=0}^{M-2}(M-1)(-1)^{j}\, e^{-(j+1)\frac{r_{\max}^2}{N_0}}\,\frac{2r_{\max}}{N_0}\binom{M-2}{j} \\
&\overset{i=j+1}{=} \sum_{i=1}^{M-1}(-1)^{i+1}\binom{M-1}{i}\, e^{-\frac{i\,r_{\max}^2}{N_0}}\,\frac{2r_{\max}\,i}{N_0}.
\end{align*}

In order to calculate $p(e|s_0)$ we need to integrate over the whole region in which $r_{\max} > r_0$:

\[ p(e|s_0) = \int_{r_0=0}^{\infty} f(r_0|s_0)\int_{r_{\max}=r_0}^{\infty} f(r_{\max}|s_0)\,dr_{\max}\,dr_0. \]

Assigning $f(r_{\max}|s_0)$ to the inner integral yields

\begin{align*}
\int_{r_{\max}=r_0}^{\infty} f(r_{\max}|s_0)\,dr_{\max} &= \sum_{i=1}^{M-1}(-1)^{i+1}\binom{M-1}{i}\underbrace{\int_{r_0}^{\infty} e^{-\frac{i\,r_{\max}^2}{N_0}}\,\frac{2r_{\max}\,i}{N_0}\,dr_{\max}}_{\text{Rayleigh distribution}} \\
&= \sum_{i=1}^{M-1}(-1)^{i+1}\binom{M-1}{i}\, e^{-\frac{i\,r_0^2}{N_0}}.
\end{align*}


Hence

\[ p(e|s_0) = \int_{r_0=0}^{\infty} \frac{2r_0}{N_0}\, e^{-\frac{r_0^2}{N_0}}\, e^{-\frac{E}{N_0}}\, I_0\!\left(\frac{2\sqrt{E}}{N_0}\,r_0\right)\cdot\sum_{i=1}^{M-1}(-1)^{i+1}\binom{M-1}{i}\, e^{-\frac{i\,r_0^2}{N_0}}\,dr_0. \]

Multiplying each summand of $p(e|s_0)$ by

\[ \frac{i+1}{i+1}\, e^{\frac{E/(i+1)^2}{N_0/(i+1)}}\, e^{-\frac{E/(i+1)^2}{N_0/(i+1)}} = 1 \]

and rearranging the summation elements yields

\begin{align*}
p(e|s_0) &= \underbrace{\sum_{i=1}^{M-1}\binom{M-1}{i}(-1)^{i+1}\frac{1}{i+1}\, e^{-\frac{i}{i+1}\frac{E}{N_0}}}_{p(e)}\cdot\underbrace{\int_0^{\infty} \frac{2(i+1)r_0}{N_0}\, e^{-\frac{r_0^2}{N_0/(i+1)}}\, e^{-\frac{E/(i+1)^2}{N_0/(i+1)}}\, I_0\!\left(\frac{2\sqrt{E/(i+1)^2}}{N_0/(i+1)}\,r_0\right)dr_0}_{\int_0^{\infty}\text{Rice distribution}\,=\,1} \\
&= \sum_{i=1}^{M-1}\binom{M-1}{i}(-1)^{i+1}\frac{1}{i+1}\, e^{-\frac{i}{i+1}\frac{E}{N_0}}.
\end{align*}
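The closed form can be sanity-checked against a direct simulation of the square-law receiver. A sketch (Python; the values $E$, $N_0$ and $M$ below are arbitrary illustration choices):

```python
import numpy as np
from math import comb, exp

def pe_formula(M, E, N0):
    """Closed-form error probability of noncoherent M-ary orthogonal signaling."""
    return sum(comb(M-1, i) * (-1)**(i+1) / (i+1) * exp(-i/(i+1) * E/N0)
               for i in range(1, M))

def pe_montecarlo(M, E, N0, trials=200_000, seed=0):
    """Simulate the square-law receiver: s_0 sent, decide on the max r_i^2."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0, 2*np.pi, trials)
    sigma = np.sqrt(N0/2)                       # variance N0/2 per component
    rc = sigma * rng.standard_normal((trials, M))
    rs = sigma * rng.standard_normal((trials, M))
    rc[:, 0] += np.sqrt(E)*np.cos(phi)          # signal present in branch 0
    rs[:, 0] += np.sqrt(E)*np.sin(phi)
    r2 = rc**2 + rs**2
    return np.mean(np.argmax(r2, axis=1) != 0)

E, N0 = 1.0, 0.5
# For M = 2 the formula reduces to the (1/2)exp(-E/(2*N0)) quoted above:
print(pe_formula(2, E, N0), 0.5*np.exp(-E/(2*N0)))
print(pe_formula(4, E, N0), pe_montecarlo(4, E, N0))
```

For $M = 2$ the sum collapses to $\frac{1}{2}e^{-E/2N_0}$ exactly, and for larger $M$ the Monte Carlo estimate agrees with the formula to within sampling noise.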

3. [1, Problem 5.42].
In on-off keying of a carrier-modulated signal, the two possible signals are

\[ s_0(t) = 0, \qquad 0 \le t \le T, \]
\[ s_1(t) = \sqrt{\frac{2\varepsilon_b}{T}}\cos(2\pi f_c t + \phi), \qquad 0 \le t \le T. \]

The corresponding received signals are

\[ r(t) = n(t), \qquad 0 \le t \le T, \]
\[ r(t) = \sqrt{\frac{2\varepsilon_b}{T}}\cos(2\pi f_c t + \phi) + n(t), \qquad 0 \le t \le T, \]

where φ is the carrier phase and n(t) is AWGN.

(a) Sketch a block diagram of the receiver (demodulator and detector) that employs noncoherent (envelope) detection.

(b) Determine the PDFs for the two possible decision variables at the detector corresponding to the two possible received signals.

(c) Derive the detector error probability assuming $\frac{\varepsilon_b}{N_0} \gg 1$, $\varepsilon_b \gg 1$, $\sigma^2 = \frac{N_0}{2}$.¹¹

¹¹You may use the following approximations for estimating the integral $\frac{1}{2}\int_0^{V_T} f(r)\,\frac{r}{\sigma^2}\, I_0\!\left(\frac{r\sqrt{\varepsilon_b}}{\sigma^2}\right)dr$:

• For $\frac{\varepsilon_b}{\sigma^2} \gg 1$: $I_0\!\left(\frac{r\sqrt{\varepsilon_b}}{\sigma^2}\right) \approx \dfrac{e^{\frac{r\sqrt{\varepsilon_b}}{\sigma^2}}}{\sqrt{2\pi\frac{r\sqrt{\varepsilon_b}}{\sigma^2}}}$.

• For $\frac{\varepsilon_b}{\sigma^2} \gg 1$: $\sqrt{\dfrac{r}{2\pi\sigma^2\sqrt{\varepsilon_b}}} \approx \sqrt{\dfrac{1}{2\pi\sigma^2}}$.

• For $\frac{\varepsilon_b}{\sigma^2} \gg 1$: the optimum threshold $V_T \approx \dfrac{\sqrt{\varepsilon_b}}{2}$.


Solution:

(a) Figure 26 depicts the noncoherent envelope detector for the on-off keying signal.

Figure 26: Envelope detector.

(b) If $s_0(t)$ is sent, then the received signal is $r(t) = n(t)$, and therefore the sampled outputs $r_c$, $r_s$ are zero-mean independent Gaussian random variables with variance $\sigma^2 = \frac{N_0}{2}$. Hence, the random variable $r = \sqrt{r_c^2 + r_s^2}$ is Rayleigh distributed and the PDF is given by:

\[ f(r|s_0(t)) = \frac{r}{\sigma^2}\, e^{-\frac{r^2}{2\sigma^2}} = \frac{2r}{N_0}\, e^{-\frac{r^2}{N_0}}. \]

If $s_1(t)$ is transmitted, then the received signal is:

\[ r(t) = \sqrt{\frac{2\varepsilon_b}{T}}\cos(2\pi f_c t + \phi) + n(t). \]

Crosscorrelating $r(t)$ with $\sqrt{\frac{2}{T}}\cos(2\pi f_c t)$ and sampling the output at $t = T$ results in

\begin{align*}
r_c &= \int_0^T r(t)\sqrt{\frac{2}{T}}\cos(2\pi f_c t)\,dt \\
&= \int_0^T \frac{2\sqrt{\varepsilon_b}}{T}\cos(2\pi f_c t + \phi)\cos(2\pi f_c t)\,dt + \int_0^T n(t)\sqrt{\frac{2}{T}}\cos(2\pi f_c t)\,dt \\
&= \sqrt{\varepsilon_b}\cos(\phi) + n_c,
\end{align*}

where $n_c$ is a zero-mean Gaussian random variable with variance $\frac{N_0}{2}$. Similarly, for the quadrature component we have:

\[ r_s = \sqrt{\varepsilon_b}\sin(\phi) + n_s. \]

The PDF of the random variable $r = \sqrt{r_c^2 + r_s^2}$ follows the Rician distribution:

\[ f(r|s_1(t)) = \frac{r}{\sigma^2}\, e^{-\frac{r^2+\varepsilon_b}{2\sigma^2}}\, I_0\!\left(\frac{r\sqrt{\varepsilon_b}}{\sigma^2}\right) = \frac{2r}{N_0}\, e^{-\frac{r^2+\varepsilon_b}{N_0}}\, I_0\!\left(\frac{2r\sqrt{\varepsilon_b}}{N_0}\right). \]

(c) For equiprobable signals the probability of error is given by

\[ p(e) = \frac{1}{2}\int_{-\infty}^{V_T} p(r|s_1(t))\,dr + \frac{1}{2}\int_{V_T}^{\infty} p(r|s_0(t))\,dr. \]


Since $r > 0$, the expression for the probability of error takes the form

\begin{align*}
p(e) &= \frac{1}{2}\int_0^{V_T} p(r|s_1(t))\,dr + \frac{1}{2}\int_{V_T}^{\infty} p(r|s_0(t))\,dr \\
&= \frac{1}{2}\int_0^{V_T} \frac{r}{\sigma^2}\, e^{-\frac{r^2+\varepsilon_b}{2\sigma^2}}\, I_0\!\left(\frac{r\sqrt{\varepsilon_b}}{\sigma^2}\right)dr + \frac{1}{2}\int_{V_T}^{\infty} \frac{r}{\sigma^2}\, e^{-\frac{r^2}{2\sigma^2}}\,dr.
\end{align*}

The optimum threshold level is the value of $V_T$ that minimizes the probability of error. However, when $\frac{\varepsilon_b}{N_0} \gg 1$ the optimum value is close to $\frac{\sqrt{\varepsilon_b}}{2}$, and we will use this threshold to simplify the analysis. The integral involving the Bessel function cannot be evaluated in closed form. Instead of $I_0(x)$ we will use the approximation

\[ I_0(x) \approx \frac{e^x}{\sqrt{2\pi x}}, \]

which is valid for large $x$, that is, for high SNR. In this case:

\[ \frac{1}{2}\int_0^{V_T} \frac{r}{\sigma^2}\, e^{-\frac{r^2+\varepsilon_b}{2\sigma^2}}\, I_0\!\left(\frac{r\sqrt{\varepsilon_b}}{\sigma^2}\right)dr \approx \frac{1}{2}\int_0^{\frac{\sqrt{\varepsilon_b}}{2}} \sqrt{\frac{r}{2\pi\sigma^2\sqrt{\varepsilon_b}}}\; e^{-(r-\sqrt{\varepsilon_b})^2/2\sigma^2}\,dr. \]

This integral is further simplified if we observe that for high SNR the integrand is dominant in the vicinity of $\sqrt{\varepsilon_b}$, and therefore the lower limit can be substituted by $-\infty$. Also,

\[ \sqrt{\frac{r}{2\pi\sigma^2\sqrt{\varepsilon_b}}} \approx \sqrt{\frac{1}{2\pi\sigma^2}}, \]

and therefore:

\[ \frac{1}{2}\int_0^{\frac{\sqrt{\varepsilon_b}}{2}} \sqrt{\frac{r}{2\pi\sigma^2\sqrt{\varepsilon_b}}}\; e^{-(r-\sqrt{\varepsilon_b})^2/2\sigma^2}\,dr \approx \frac{1}{2}\int_{-\infty}^{\frac{\sqrt{\varepsilon_b}}{2}} \sqrt{\frac{1}{2\pi\sigma^2}}\; e^{-(r-\sqrt{\varepsilon_b})^2/2\sigma^2}\,dr = \frac{1}{2}\,Q\!\left(\sqrt{\frac{\varepsilon_b}{2N_0}}\right). \]

Finally:

\begin{align*}
p(e) &= \frac{1}{2}\,Q\!\left(\sqrt{\frac{\varepsilon_b}{2N_0}}\right) + \frac{1}{2}\int_{\frac{\sqrt{\varepsilon_b}}{2}}^{\infty} \frac{2r}{N_0}\, e^{-\frac{r^2}{N_0}}\,dr \\
&= \frac{1}{2}\,Q\!\left(\sqrt{\frac{\varepsilon_b}{2N_0}}\right) + \frac{1}{2}\, e^{-\frac{\varepsilon_b}{4N_0}}.
\end{align*}
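As a sanity check on the final expression, the sketch below (Python; $\varepsilon_b/N_0 = 20$ is an arbitrary high-SNR illustration) compares the approximation with a direct Monte Carlo simulation of the envelope detector at threshold $V_T = \sqrt{\varepsilon_b}/2$:

```python
import numpy as np
from math import erfc, exp, sqrt, pi

def pe_ook_approx(eb, N0):
    """High-SNR approximation derived above: 0.5*Q(sqrt(eb/2N0)) + 0.5*exp(-eb/4N0)."""
    Q = lambda x: 0.5 * erfc(x / sqrt(2))
    return 0.5*Q(sqrt(eb/(2*N0))) + 0.5*exp(-eb/(4*N0))

def pe_ook_montecarlo(eb, N0, trials=400_000, seed=1):
    """Envelope detector with threshold V_T = sqrt(eb)/2 and random carrier phase."""
    rng = np.random.default_rng(seed)
    sigma = sqrt(N0/2)
    VT = sqrt(eb)/2
    # s0 sent: envelope is Rayleigh
    r0 = np.hypot(sigma*rng.standard_normal(trials), sigma*rng.standard_normal(trials))
    # s1 sent: envelope is Rician
    phi = rng.uniform(0, 2*pi, trials)
    r1 = np.hypot(sqrt(eb)*np.cos(phi) + sigma*rng.standard_normal(trials),
                  sqrt(eb)*np.sin(phi) + sigma*rng.standard_normal(trials))
    return 0.5*np.mean(r0 > VT) + 0.5*np.mean(r1 < VT)

eb, N0 = 1.0, 0.05      # eb/N0 = 20 (13 dB)
print(pe_ook_approx(eb, N0), pe_ook_montecarlo(eb, N0))
```

At this SNR the two estimates agree closely, confirming that the approximations (threshold at $\sqrt{\varepsilon_b}/2$, Gaussian tail for the Rician term) are tight.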

References

[1] J. G. Proakis, Digital Communications, 4th Edition, John Wiley and Sons, 2000.

[2] S. Haykin, Communication Systems, 4th Edition, John Wiley and Sons, 2000.

[3] A. Goldsmith, Wireless Communications, Cambridge University Press, 2006.

[4] J. G. Proakis and M. Salehi, Communication Systems Engineering, 2nd Edition, Prentice-Hall Inc., 2002.