1/30
Spectrum Estimation
Lim & Oppenheim: Ch. 2
Hayes: Ch. 8
Proakis & Manolakis: Ch. 14
2/30
Introduction & Issues
3/30
Recall Definition of PSD
Given a WSS random process x[n], the PSD is defined by:
<< Warning: Ch. 2 of L&O uses “ω” for the DT frequency whereas Porat uses “Ω”. Also, Hayes expresses DTFTs (and therefore PSDs) in terms of ejω; it means the same thing – it is just a matter of notation. Hayes’ notation is more precise when you consider going from the ZT H(z) to the frequency response H(ejω) = H(z)|z=ejω >>
    S_x(ω) = lim_{M→∞} E{ (1/(2M+1)) |Σ_{n=-M}^{M} x[n] e^{-jωn}|² }

Recall the Wiener-Khinchine Theorem:

    S_x(ω) = F{ r_x[k] } = Σ_{k=-∞}^{∞} r_x[k] e^{-jωk}

where r_x[k] = E{ x[n+k] x*[n] } is the ACF of the RP x[n].
LO-2.1
4/30
Problem of PSD Estimation
1. Both expressions for S_x(ω) involve ensemble averaging, BUT in practice we get only one realization from the ensemble.
2. Both expressions use a Fourier transform of infinite length, BUT in practice we get only a finite number of samples. (Note: a finite # of samples allows only a finite # of ACF values.)

These two expressions motivate two approaches to PSD estimation:
1. Compute the DFT of the signal and then do some form of averaging.
2. Compute an estimate of the ACF using some form of averaging and then compute the DFT.
Both of these approaches are called “Classical” Nonparametric Approaches – they strive to do the best with the available data w/o making any assumptions other than that the underlying process is WSS.
LO-2.2
5/30
The “Modern” Parametric Approach
There is a so-called “Modern” approach to PSD estimation that tries to deal with the issue of having only a finite # of samples:
• Assume a recursive model for the ACF
• Allows recursive extension of the ACF using the known values

Example Model:

    r_x[k] = −a_1 r_x[k-1] − a_2 r_x[k-2] − … − a_p r_x[k-p],   k ≥ p+1

We’ll see that for this approach all we need to do is estimate the model parameters {a_i} and then use them to get an estimate of the PSD. Thus, this approach is called “Parametric”.
LO-2.2
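As a minimal sketch of how such a recursive model extends the ACF beyond the known lags, consider an AR(1) example; the model parameter a_1 = −0.5 and the starting ACF values here are hypothetical illustration choices, not from the text:

```python
import numpy as np

def extend_acf(r_known, a, num_extra):
    """Extend an ACF via r_x[k] = -sum_i a[i+1] * r_x[k-1-i], for k >= p+1."""
    r = list(r_known)
    p = len(a)
    for _ in range(num_extra):
        k = len(r)
        r.append(-sum(a[i] * r[k - 1 - i] for i in range(p)))
    return np.array(r)

# AR(1) example: x[n] = 0.5 x[n-1] + w[n]  =>  a_1 = -0.5,
# so the model recursion is r_x[k] = 0.5 * r_x[k-1]
a = [-0.5]
r_known = [1.0, 0.5]              # r_x[0], r_x[1] taken as known/estimated
r_ext = extend_acf(r_known, a, 8)
print(r_ext[:4])                  # 1.0, 0.5, 0.25, 0.125
```

Only the first p+1 lags need to be estimated from data; the model supplies the rest.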
6/30
Review of Statistics
Before we can really address the issue of estimating a PSD we need to review a few issues from statistics.

What are we doing in PSD Estimation?
Given: a finite # of samples from one realization
Get: something that “resembles” the PSD of the process
[Figure: three different realizations x(t) of the same process, each passed through “PSD Estimation” to produce its own estimate Ŝ_x(ω).]
Each signal realization gives a different PSD estimate realization.
Each PSD Estimate is a Realization of a Random Process
LO-2.3
7/30
Review of Statistics (Cont.)
Thus… we must view the PSD estimate as a random process.

Need to characterize its mean and variance:
• Want mean of PSD estimate = true PSD
• Want var of PSD estimate = “small”
To make things easier to discuss, we use a slightly different estimation problem to illustrate the ideas… Consider the process
    x[n] = A + w[n],   A = constant,   w[n] = AWGN (zero-mean, variance σ²)
Given a finite set of data samples x[0], …, x[N-1], estimate A. A reasonable estimate is the “sample mean”:

    Â = (1/N) Σ_{n=0}^{N-1} x[n]

For each realization of x[n] you get a different value for the estimate of A.
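A quick Monte Carlo sketch of this setup (the values A = 3, σ = 1, N = 100 are arbitrary illustration choices) shows the estimator clustering around A with small spread:

```python
import numpy as np

rng = np.random.default_rng(0)
A, sigma, N, trials = 3.0, 1.0, 100, 2000

# one sample-mean estimate A_hat per independent realization of the data record
A_hat = np.array([np.mean(A + sigma * rng.standard_normal(N))
                  for _ in range(trials)])

print(A_hat.mean())   # close to A            -> "correct on average"
print(A_hat.var())    # close to sigma^2/N    -> small fluctuations
```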
8/30
Review of Statistics (Cont.)
We want two things for the estimate:

1. We want our estimate to be “correct on average”:

    E{Â} = A

If this is true, we say the estimate is unbiased. (Ch. 2 of L&O shows that the sample mean is unbiased.)
If it is not true, then we say the estimate is biased. If it is not true, but

    lim_{N→∞} E{Â} = A

then we say that the estimate is asymptotically unbiased.

2. We want small fluctuations from estimate to estimate:

    var{Â} = “small”

Also, we’d like var{Â} → 0 as N → ∞. (Ch. 2 of L&O shows that this is true for the sample mean.)
9/30
Review of Statistics (Cont.)
Can capture both mean and variance of an estimate by using Mean-Square-Error (MSE):

    MSE{Â} = var{Â} + B²{Â},   where B{Â} = A − E{Â}

Usual goal of estimation: minimize MSE
• Minimize bias
• Minimize variance

For PSD estimation we want:

    E{Ŝ_x(ω)} = S_x(ω)
    var{Ŝ_x(ω)} = “small”
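The MSE split above is an exact identity, which we can check numerically; the deliberately biased “shrunken mean” estimator below is a hypothetical example chosen so that both bias and variance contribute:

```python
import numpy as np

rng = np.random.default_rng(1)
A, N, trials = 2.0, 50, 1000

# hypothetical biased estimator: divide the sum by N+10 instead of N
est = np.array([np.sum(A + rng.standard_normal(N)) / (N + 10)
                for _ in range(trials)])

mse = np.mean((est - A) ** 2)        # empirical MSE
bias = A - est.mean()                # B{A_hat} = A - E{A_hat}
decomp = est.var() + bias ** 2       # var{A_hat} + B^2{A_hat}
print(mse, decomp)                   # the two agree to machine precision
```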
10/30
Non-Parametric Spectral Estimation
H-8.2, LO-2.4

• Periodogram
• Windowed Periodogram
• Averaged Periodogram
• Windowed & Averaged Periodogram
• Blackman-Tukey Method
• Minimum Variance Method
11/30
Family of Non-Parametric Methods

“Classical” Methods (FT-Based Methods):
• Periodogram-Based: Periodogram, Modified, Bartlett, Welch
• ACF-Estimate-Based: Blackman-Tukey

“Non-Classical” Methods (Non-FT-Based Methods):
• Filter Bank View: Minimum Variance
12/30
Family of “Classical” Methods
Periodogram
Modified Periodogram (use window)
Bartlett’s Method (average periodograms)
Welch’s Method (average windowed periodograms)
Blackman-Tukey

The periodogram family stems from:

    S_x(ω) = lim_{M→∞} E{ (1/(2M+1)) |Σ_{n=-M}^{M} x[n] e^{-jωn}|² }

while Blackman-Tukey stems from:

    S_x(ω) = Σ_{k=-∞}^{∞} r_x[k] e^{-jωk}
13/30
The Periodogram
14/30
Periodogram - Definition
In practice we have one set of finite-duration data. Two practical problems:
1. Can’t do the expected value
2. Can’t do the limit
The periodogram is a method that ignores them both!!!
Based on:

    S_x(ω) = lim_{M→∞} E{ (1/(2M+1)) |Σ_{n=-M}^{M} x[n] e^{-jωn}|² }

the periodogram is:

    Ŝ_PER(ω) = (1/N) |Σ_{n=0}^{N-1} x[n] e^{-jωn}|²
In practice we compute this using the DFT (possibly using zero-padding) – which computes the DTFT at discrete frequency points (“DFT Bins”)
H-8.2.1
15/30
Periodogram - Computation
In practice we compute this using the DFT (FFT), usually with zero-padding, which computes the DTFT at discrete frequency points (“DFT bins”):

    Ŝ_PER[k] = (1/N) | Σ_{n=0}^{N-1} x[n] e^{-j2πnk/Nz} |²,    ω_k = 2πk/Nz

    N  = number of signal samples
    Nz = DFT size, after zero-padding

[Block diagram: signal samples x[0], …, x[N-1] → zero-pad to Nz → DFT (FFT) → |·|² → scale by 1/N → Ŝ_PER[k]]
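This computation can be sketched in a few lines; the record length N = 256 and FFT size Nz = 1024 are arbitrary illustrative choices:

```python
import numpy as np

def periodogram(x, nfft=None):
    """Periodogram via zero-padded DFT: S_per[k] = (1/N)|X[k]|^2."""
    N = len(x)
    X = np.fft.fft(x, nfft or N)   # np.fft.fft zero-pads x out to nfft bins
    return np.abs(X) ** 2 / N      # note: scale by 1/N (# of samples), not 1/nfft

rng = np.random.default_rng(2)
x = rng.standard_normal(256)
S = periodogram(x, nfft=1024)      # S[k] estimates the PSD at omega_k = 2*pi*k/1024
```

By Parseval’s relation, the average of Ŝ_PER[k] over all bins equals the average signal power (1/N)Σ|x[n]|², a handy sanity check on the scaling.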
16/30
Periodogram – Viewed as Filter Bank
Although we ALWAYS implement the periodogram using the DFT, it is helpful to interpret it as a filter bank.
Define the impulse response of an FIR filter as:
    h_i[n] = (1/N) e^{jnω_i},   0 ≤ n < N
    h_i[n] = 0,                 otherwise
Frequency Response of this filter is:
    H_i(ω) = Σ_{n=0}^{N-1} h_i[n] e^{-jωn}
           = (1/N) e^{-j(ω−ω_i)(N-1)/2} · sin[N(ω−ω_i)/2] / sin[(ω−ω_i)/2]
<< See Figure 8.3 in Hayes>>
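The closed form above can be checked against a direct evaluation of the DTFT sum; the filter length N = 16 and center frequency ω_i are hypothetical illustration choices:

```python
import numpy as np

N = 16
w_i = 2 * np.pi * 3 / N              # center frequency omega_i of the i-th filter
n = np.arange(N)
h = np.exp(1j * n * w_i) / N         # h_i[n] = (1/N) e^{j n omega_i}, 0 <= n < N

def H_direct(w):
    # frequency response by brute-force DTFT sum
    return np.sum(h * np.exp(-1j * w * n))

def H_closed(w):
    # closed form: (1/N) e^{-j(w-wi)(N-1)/2} sin[N(w-wi)/2] / sin[(w-wi)/2]
    d = w - w_i
    return np.exp(-1j * d * (N - 1) / 2) * np.sin(N * d / 2) / (N * np.sin(d / 2))
```

Note that H_i(ω_i) = 1 (unity gain at the center frequency), so the filter passes the band around ω_i and suppresses the rest with sinc-like sidelobes.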
17/30
Periodogram – Viewed as Filter Bank (cont.)
Now the output of the ith filter is:

    y_i[n] = x[n] * h_i[n] = Σ_k h_i[k] x[n-k]
           = (1/N) Σ_{k=n-N+1}^{n} x[k] e^{j(n-k)ω_i}
Now one estimate of the power at the output of this filter is |y_i[n]|², for any value of n. Choosing n = N-1 gives the periodogram:

    |y_i[N-1]|² = | (1/N) Σ_{k=0}^{N-1} x[k] e^{j(N-1-k)ω_i} |²
                = (1/N²) | Σ_{k=0}^{N-1} x[k] e^{-jkω_i} |²
                = (1/N) Ŝ_PER(ω_i)
<< See Figure 8.4 in Hayes>>
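The filter-bank identity N·|y_i[N-1]|² = Ŝ_PER(ω_i) can be verified numerically; the record length and analysis frequency below are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
x = rng.standard_normal(N)
w_i = 2 * np.pi * 10 / N                 # analysis frequency omega_i

# filter-bank route: convolve x with h_i and keep the output sample at n = N-1
h = np.exp(1j * np.arange(N) * w_i) / N  # h_i[n] = (1/N) e^{j n omega_i}
y_last = np.convolve(x, h)[N - 1]
power_est = N * np.abs(y_last) ** 2      # N |y_i[N-1]|^2

# direct periodogram at the same frequency
S_per = np.abs(np.sum(x * np.exp(-1j * w_i * np.arange(N)))) ** 2 / N
```

Both routes give the same number, which is the point of the filter-bank view: each DFT bin is one bandpass filter plus a single-sample power estimate.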
18/30
Periodogram – Performance
For a good PSD estimate we’d like to have (at the very least):

    lim_{N→∞} E{Ŝ_x(ω)} = S_x(ω)      (“asymptotically unbiased”)
    lim_{N→∞} var{Ŝ_x(ω)} = 0

(Actually, we would prefer the estimate to be unbiased even for finite N.)

Does the periodogram have these characteristics? Let’s find out!!!
H-8.2.2
19/30
Periodogram – Performance: Bias
Property #1: The periodogram is biased.
Property #2: But… the periodogram is asymptotically unbiased.

Proof: Taking the EV of the periodogram gives:

    E{Ŝ_PER(ω)} = E{ (1/N) |Σ_{n=0}^{N-1} x[n] e^{-jωn}|² }
                = (1/N) Σ_{n=0}^{N-1} Σ_{m=0}^{N-1} E{x[n] x*[m]} e^{-jω(n-m)}
                = (1/N) Σ_{n=0}^{N-1} Σ_{m=0}^{N-1} r_x[n-m] e^{-jω(n-m)}

“Sum on diagonals”: r_x[n-m] is constant on each diagonal of the (n,m) grid, and the diagonal at lag k = n-m contains N-|k| terms, so:

    E{Ŝ_PER(ω)} = Σ_{k=-(N-1)}^{N-1} (1 − |k|/N) r_x[k] e^{-jωk}
                = Σ_{k=-(N-1)}^{N-1} w_B[k] r_x[k] e^{-jωk}
                = (1/2π) W_B(ω) ⊛ S_x(ω)  ≠  S_x(ω)

where w_B[k] = 1 − |k|/N is the Bartlett (triangle) window and ⊛ denotes circular convolution.
20/30
Periodogram – Performance: Bias (cont.)
Proof (cont.): This shows that the periodogram is biased:

    E{Ŝ_PER(ω)} = (1/2π) W_B(ω) ⊛ S_x(ω)

The bias comes from the smoothing effect of the Bartlett window. (Smoothing also reduces the resolution of sharp spectral features.)

But as N→∞ the Bartlett kernel tends to a delta function in the frequency domain, or, equivalently, in the TD the Bartlett window tends to 1:

    lim_{N→∞} E{Ŝ_PER(ω)} = lim_{N→∞} Σ_{k=-(N-1)}^{N-1} (1 − |k|/N) r_x[k] e^{-jωk}
                           = Σ_{k=-∞}^{∞} r_x[k] e^{-jωk}
                           = S_x(ω)

Thus, the periodogram is asymptotically unbiased.
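The “sum on diagonals” step is a purely algebraic identity, so it can be verified numerically for any ACF; the r_x[k] = 0.8^|k| below is a hypothetical ACF chosen just for the check:

```python
import numpy as np

N, w = 8, 0.9
r = lambda k: 0.8 ** abs(k)      # hypothetical ACF, r_x[k] = 0.8^|k|

# left side: the double sum (1/N) sum_n sum_m r_x[n-m] e^{-jw(n-m)}
idx = np.arange(N)
double = sum(r(n - m) * np.exp(-1j * w * (n - m))
             for n in idx for m in idx) / N

# right side: "sum on diagonals" -> Bartlett-windowed sum over lags k = n-m
ks = np.arange(-(N - 1), N)
diag = np.sum((1 - np.abs(ks) / N) * np.array([r(k) for k in ks])
              * np.exp(-1j * w * ks))
```

The two sides agree exactly, which confirms that the triangular (Bartlett) weighting is forced on us by the finite record, not by any windowing choice.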
21/30
Periodogram – Performance: Variance
Property #3: The variance of the periodogram does NOT (in general) tend to zero as N→∞.

Proof: Difficult to prove for the general case… so this is proved under an assumption: x[n] is a complex-valued white Gaussian process with zero mean and variance σ². Under this assumption the true PSD and ACF are:

    S_x(ω) = σ²  ∀ω,   and   r_x[k] = σ² δ[k]

The variance of the periodogram is what we want to analyze and is given by:

    var{Ŝ_PER(ω)} = E{Ŝ²_PER(ω)} − [E{Ŝ_PER(ω)}]²

Look at the second (squared-mean) term first.
22/30
Periodogram – Performance: Variance (cont.)
Proof (cont.): From our previous analysis of bias (and our assumptions on the process) we know that the second term is:

    [E{Ŝ_PER(ω)}]² = [ Σ_{k=-(N-1)}^{N-1} (1 − |k|/N) r_x[k] e^{-jωk} ]²
                   = [ Σ_{k=-(N-1)}^{N-1} (1 − |k|/N) σ² δ[k] e^{-jωk} ]²
                   = [σ²]²
                   = σ⁴

(Aside: under our assumptions the periodogram is unbiased!)

So the variance of the periodogram is now:

    var{Ŝ_PER(ω)} = E{Ŝ²_PER(ω)} − σ⁴

Now look at the remaining term, E{Ŝ²_PER(ω)}.
23/30
Periodogram – Performance: Variance (cont.)
Proof (cont.): As a means of evaluating this first term we consider the more general quantity E{Ŝ_PER(ω1) Ŝ_PER(ω2)}. Writing out each periodogram as a squared magnitude,

    Ŝ_PER(ω1) = (1/N) [Σ_{k=0}^{N-1} x[k] e^{-jω1 k}] [Σ_{l=0}^{N-1} x[l] e^{-jω1 l}]*
    Ŝ_PER(ω2) = (1/N) [Σ_{n=0}^{N-1} x[n] e^{-jω2 n}] [Σ_{m=0}^{N-1} x[m] e^{-jω2 m}]*

and manipulating gives:

    E{Ŝ_PER(ω1) Ŝ_PER(ω2)}
      = (1/N²) Σ_{k=0}^{N-1} Σ_{l=0}^{N-1} Σ_{m=0}^{N-1} Σ_{n=0}^{N-1}
          E{ x[k] x*[l] x*[m] x[n] } exp{ -j[ω1(k-l) − ω2(m-n)] }

But what is this expected value?
24/30
Periodogram – Performance: Variance (cont.)
Proof (cont.): Since we assumed the process is Gaussian, we can use a standard result (moment factoring) for complex JOINTLY Gaussian RVs:

    E{x[k] x*[l] x*[m] x[n]} = E{x[k] x*[l]} E{x*[m] x[n]} + E{x[k] x*[m]} E{x*[l] x[n]}

Now, using this result together with the assumption of whiteness:

    E{x[k] x*[l] x*[m] x[n]} = σ⁴ [ δ[k-l] δ[m-n] + δ[k-m] δ[l-n] ]

Using this result in the four-fold sum gives:
25/30
Periodogram – Performance: Variance (cont.)
    E{Ŝ_PER(ω1) Ŝ_PER(ω2)}
      = (σ⁴/N²) Σ_k Σ_l Σ_m Σ_n [ δ[k-l] δ[m-n] + δ[k-m] δ[l-n] ]
          exp{ -j[ω1(k-l) − ω2(m-n)] }
      = (σ⁴/N²) Σ_k Σ_m 1  +  (σ⁴/N²) Σ_k Σ_l exp{ -j(ω1−ω2)(k-l) }
      = σ⁴ + (σ⁴/N²) | Σ_{k=0}^{N-1} e^{-j(ω1−ω2)k} |²

In the first term the deltas force k = l and m = n, the exponent vanishes, and the double sum is N², cancelling the 1/N². In the second term the deltas force m = k and n = l, and the remaining double sum over (k,l) factors into a squared magnitude (the “sum on diagonals” trick again).
26/30
Periodogram – Performance: Variance (cont.)
Proof (cont.): Now the FT of the Bartlett window is:

    F{w_B[k]} = (1/N) ( sin(Nω/2) / sin(ω/2) )²

and the squared-magnitude sum above has exactly this shape. So using it in the above result gives:

    E{Ŝ_PER(ω1) Ŝ_PER(ω2)} = σ⁴ [ 1 + ( sin[N(ω1−ω2)/2] / (N sin[(ω1−ω2)/2]) )² ]
27/30
Periodogram – Performance: Variance (cont.)
Proof (cont.): To find the first term in the variance expression of interest we set ω = ω1 = ω2 in the above expression (the sin ratio → 1 as ω1−ω2 → 0) to get:

    E{Ŝ²_PER(ω)} = σ⁴ [1 + 1] = 2σ⁴

Now using this in the expression for the variance gives:

    var{Ŝ_PER(ω)} = E{Ŝ²_PER(ω)} − σ⁴ = 2σ⁴ − σ⁴

    var{Ŝ_PER(ω)} = σ⁴
…which DOES NOT go to zero as N→∞
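A Monte Carlo sketch makes this concrete; real-valued unit-variance noise is used for convenience (at interior frequencies the var ≈ σ⁴ result still holds), and the record lengths 64 and 1024 are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(4)
trials = 4000

def periodogram_variance(N):
    # variance of the periodogram of white Gaussian noise at a fixed
    # interior frequency (bin N/4, i.e. omega = pi/2), across realizations
    X = np.fft.fft(rng.standard_normal((trials, N)), axis=1)
    S = np.abs(X[:, N // 4]) ** 2 / N
    return S.var()

v64, v1024 = periodogram_variance(64), periodogram_variance(1024)
print(v64, v1024)   # both stay near sigma^4 = 1; no decay with N
```

Increasing N by a factor of 16 buys essentially nothing in variance; this is the motivation for the averaging methods (Bartlett, Welch) that follow.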
28/30
Periodogram – Performance: Covariance
Property #4: Increasing N leads to rapidly fluctuating periodograms (even where the true PSD is smooth).

“Proof”: Using the previous results, the covariance of the periodogram is given by:

    cov{Ŝ_PER(ω1), Ŝ_PER(ω2)}
      = E{Ŝ_PER(ω1) Ŝ_PER(ω2)} − E{Ŝ_PER(ω1)} E{Ŝ_PER(ω2)}
      = σ⁴ [ 1 + ( sin[N(ω1−ω2)/2] / (N sin[(ω1−ω2)/2]) )² ] − σ⁴
      = σ⁴ ( sin[N(ω1−ω2)/2] / (N sin[(ω1−ω2)/2]) )²

Covariance is a measure of how correlated two RVs are: cov(X,Y) = 0 means X and Y are uncorrelated, so (to second order) knowing one tells you nothing about the other. The equation above shows the covariance of the periodogram is zero whenever ω1 − ω2 is a nonzero multiple of 2π/N, and these uncorrelated frequency pairs get closer together as N grows.

⇒ The periodogram fluctuates rapidly from frequency to frequency.
29/30
Periodogram – Performance for Non-White RP
The above analysis was done for white noise. Hayes p. 407 gives an argument that shows similar results for the non-white case:

    E{Ŝ_PER(ω1) Ŝ_PER(ω2)} ≈ S_x(ω1) S_x(ω2) [ 1 + ( sin[N(ω1−ω2)/2] / (N sin[(ω1−ω2)/2]) )² ]

    cov{Ŝ_PER(ω1), Ŝ_PER(ω2)} ≈ S_x(ω1) S_x(ω2) ( sin[N(ω1−ω2)/2] / (N sin[(ω1−ω2)/2]) )²

    var{Ŝ_PER(ω)} ≈ S_x²(ω)
30/30
Periodogram – Examples
1. Bias – Effect of Window
   Periodogram of sinusoid: see Hayes Fig. 8.5
2. Variance and Covariance
   Periodogram of white noise: see L&O Fig. 2.4
   Periodogram of sinusoid: see Hayes Fig. 8.6
3. Resolution – Effect of Window
   Periodogram of 2 sinusoids: see Hayes Fig. 8.8