8/8/2019 TLSK1_8_RLS
Communication Signal Processing I
8. Recursive Least Squares Algorithm
M. Juntti, University of Oulu, Dept. Electrical and Inform. Eng., Telecomm. Laboratory & CWC
Communication Signal Processing I:
8. Recursive Least Squares Algorithm

Markku Juntti

Overview: Kalman filtering is briefly reviewed. The method of least squares is modified to a recursive form suitable for adaptive filtering applications. Its properties are then evaluated.

Source: The material is mainly based on Chapters 7 and 13 of the course book [1] (Chapters 9 and 10 of [1A]).
Course Contents

1. Introduction

Part I: Background
2. Optimum receiver design problem and equalization
3. Mathematical tools

Part II: Linear and Adaptive Filters and Equalizers
4. Optimum linear filters
5. Matrix algorithms
6. Stochastic gradient and LMS algorithms
7. Method of least squares
8. Recursive least squares algorithm
9. Rotations and reflections
10. Square-root and order recursive adaptive filters

Part III: Nonlinear Equalizers
11. Decision-directed equalization
12. Iterative joint equalization and decoding

Part IV: Other Applications
13. Spectrum estimation
14. Array processing
15. Summary
Contents

- Review of last lecture
- Review of Kalman filters
  - Kalman filtering problem
  - Innovations process
  - State estimation by the innovations process
  - Summary of Kalman filtering
  - Summary and discussion
- Introduction to RLS algorithm
  - Matrix inversion lemma
  - Exponentially weighted RLS algorithm
  - Weighted error squares
  - Convergence analysis
  - Application example: equalization
- Relation to Kalman filter
- Summary
Review of Last Lecture

- Method of least squares (LS): no statistical assumptions on the observations (data).
  - An alternative to the Wiener filter theory: minimize the sum of squared modeling errors.
- The least squares estimate is
  - model-dependent,
  - a block-by-block method,
  - the BLUE method,
  - the MVUE method for Gaussian signals.
- Robust computation can be based on the singular value decomposition (SVD) of the data matrix to calculate the pseudoinverse.
Multiple Linear Regression Model

Assume an unknown underlying model to be estimated with u(i) and d(i) known.

Filter output:
$$ y(i) = \sum_{k=0}^{M-1} w_k^*\, u(i-k). $$

Estimation error:
$$ e(i) = d(i) - y(i) = d(i) - \sum_{k=0}^{M-1} w_k^*\, u(i-k). $$

Sum of error squares:
$$ \mathcal{E}(w_0, w_1, \ldots, w_{M-1}) = \sum_i |e(i)|^2. $$
Principle of Orthogonality

The error signal:
$$ e(i) = d(i) - \sum_{k=0}^{M-1} w_k^*\, u(i-k) = d(i) - \mathbf{w}^{\mathrm{H}} \mathbf{u}(i). $$

The cost function (the sum of error squares):
$$ \mathcal{E}(w_0, w_1, \ldots, w_{M-1}) = \sum_{i=M}^{N} e(i)\, e^*(i) = \sum_{i=M}^{N} |e(i)|^2. $$

Principle of orthogonality with time averages:
$$ \sum_{i=M}^{N} u(i-k)\, e_{\min}^*(i) = 0, \qquad k = 0, 1, \ldots, M-1. $$

The filter output then provides the linear LS estimate of the response d(i).
Matrix Formulation of the Normal Equations

$$ \boldsymbol{\Phi}\, \mathbf{w} = \mathbf{z} \;\Rightarrow\; \mathbf{w} = \boldsymbol{\Phi}^{-1} \mathbf{z}, \quad \text{if } \boldsymbol{\Phi} \text{ is nonsingular}, $$

or written out element by element,

$$ \begin{bmatrix} \phi(0,0) & \phi(1,0) & \cdots & \phi(M-1,0) \\ \phi(0,1) & \phi(1,1) & \cdots & \phi(M-1,1) \\ \vdots & \vdots & & \vdots \\ \phi(0,M-1) & \phi(1,M-1) & \cdots & \phi(M-1,M-1) \end{bmatrix} \begin{bmatrix} w_0 \\ w_1 \\ \vdots \\ w_{M-1} \end{bmatrix} = \begin{bmatrix} z(0) \\ z(1) \\ \vdots \\ z(M-1) \end{bmatrix}. $$

Here $\boldsymbol{\Phi}$ is the time-averaged (auto)correlation matrix
$$ \boldsymbol{\Phi} = \sum_{i=M}^{N} \mathbf{u}(i)\, \mathbf{u}^{\mathrm{H}}(i) $$
and $\mathbf{z}$ is the time-averaged cross-correlation vector. In data-matrix form,
$$ \mathbf{w} = \big(\mathbf{A}^{\mathrm{H}} \mathbf{A}\big)^{-1} \mathbf{A}^{\mathrm{H}} \mathbf{d}. $$
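As a concrete check, the normal-equation solution and the SVD-based pseudoinverse route mentioned in the last-lecture review give the same tap weights. A minimal NumPy sketch with made-up data (the sizes N, M and the noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: N observations of an M-tap linear model.
N, M = 50, 4
A = rng.standard_normal((N, M))                  # data matrix, rows u^H(i)
w_true = rng.standard_normal(M)
d = A @ w_true + 0.01 * rng.standard_normal(N)   # desired response

# Normal equations: Phi w = z with Phi = A^H A, z = A^H d.
Phi = A.conj().T @ A
z = A.conj().T @ d
w_ls = np.linalg.solve(Phi, z)

# Robust alternative: pseudoinverse via the SVD.
w_svd = np.linalg.pinv(A) @ d

assert np.allclose(w_ls, w_svd)
```

Both routes minimize the same sum of squared errors; the SVD route is preferred when A is ill-conditioned.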
Review of Kalman Filters

- Wiener filters are optimal for stationary environments.
- Kalman filters enable efficient recursive computation based on a state-space model.
- Kalman filters are optimal in the MMSE sense for nonstationary environments described by a state-space model.
- Related (similar) to recursive least squares (RLS) adaptive filtering algorithms.
- Summary herein; details in Statistical Signal Processing.
Kalman Filtering Problem

[Block diagram: process noise v1(n) drives the state x(n) through the state transition matrix F(n+1,n) and a unit delay z^{-1}; the state passes through the measurement matrix C(n) and is corrupted by measurement noise v2(n) to give the observation y(n).]

Process equation:
$$ \mathbf{x}(n+1) = \mathbf{F}(n+1, n)\, \mathbf{x}(n) + \mathbf{v}_1(n), $$
where x(n) is the state vector, F(n+1,n) the state transition matrix, and v1(n) the process noise.

Special case, a time-invariant system: F(n+1,n) = F.

Measurement equation:
$$ \mathbf{y}(n) = \mathbf{C}(n)\, \mathbf{x}(n) + \mathbf{v}_2(n), $$
where y(n) is the observation vector and v2(n) the measurement noise.

Problem: Find the MMSE estimates of the state x(i), 1 ≤ i ≤ n, using all the observations y(i), 1 ≤ i ≤ n.
Innovations Process

The MMSE estimate of the observed data y(n) is $\hat{\mathbf{y}}(n \mid \mathcal{Y}_{n-1})$, where $\mathcal{Y}_{n-1}$ is the space spanned by the observations y(i), 1 ≤ i ≤ n−1.

The estimation error process
$$ \boldsymbol{\alpha}(n) = \mathbf{y}(n) - \hat{\mathbf{y}}(n \mid \mathcal{Y}_{n-1}), \qquad n = 1, 2, \ldots, $$
is the innovations process, since by the orthogonality principle:

1. $\mathrm{E}\big[\boldsymbol{\alpha}(n)\, \mathbf{y}^{\mathrm{H}}(k)\big] = \mathbf{0}, \quad 1 \le k \le n-1$,
2. $\mathrm{E}\big[\boldsymbol{\alpha}(n)\, \boldsymbol{\alpha}^{\mathrm{H}}(k)\big] = \mathbf{0}, \quad 1 \le k \le n-1$,
3. there is a one-to-one correspondence $\{\mathbf{y}(1), \mathbf{y}(2), \ldots, \mathbf{y}(n)\} \leftrightarrow \{\boldsymbol{\alpha}(1), \boldsymbol{\alpha}(2), \ldots, \boldsymbol{\alpha}(n)\}$.
Correlation Matrix of the Innovations Process

Correlation matrix:
$$ \mathbf{R}(n) = \mathrm{E}\big[\boldsymbol{\alpha}(n)\, \boldsymbol{\alpha}^{\mathrm{H}}(n)\big] = \mathbf{C}(n)\, \mathbf{K}(n, n-1)\, \mathbf{C}^{\mathrm{H}}(n) + \mathbf{Q}_2(n), \qquad \mathbf{Q}_2(n) = \mathrm{E}\big[\mathbf{v}_2(n)\, \mathbf{v}_2^{\mathrm{H}}(n)\big], $$
where the predicted state-error correlation matrix is
$$ \mathbf{K}(n, n-1) = \mathrm{E}\big[\boldsymbol{\varepsilon}(n, n-1)\, \boldsymbol{\varepsilon}^{\mathrm{H}}(n, n-1)\big] $$
and the predicted state error is
$$ \boldsymbol{\varepsilon}(n, n-1) = \mathbf{x}(n) - \hat{\mathbf{x}}(n \mid \mathcal{Y}_{n-1}). $$
State Estimation by the Innovations Process

Recall the one-to-one correspondence
$$ \{\mathbf{y}(1), \mathbf{y}(2), \ldots, \mathbf{y}(n)\} \leftrightarrow \{\boldsymbol{\alpha}(1), \boldsymbol{\alpha}(2), \ldots, \boldsymbol{\alpha}(n)\}. $$

The MMSE state estimate can therefore be expressed as a linear combination of the innovations:
$$ \hat{\mathbf{x}}(i \mid \mathcal{Y}_n) = \sum_{k=1}^{n} \mathbf{B}_i(k)\, \boldsymbol{\alpha}(k). $$

Recursive estimate update:
$$ \hat{\mathbf{x}}(n+1 \mid \mathcal{Y}_n) = \mathbf{F}(n+1, n)\, \hat{\mathbf{x}}(n \mid \mathcal{Y}_{n-1}) + \mathbf{G}(n)\, \boldsymbol{\alpha}(n), $$
where G(n) is the Kalman gain and G(n) α(n) is the correction term driven by the innovation.
Recursive One-Step Predictor

$$ \hat{\mathbf{x}}(n+1 \mid \mathcal{Y}_n) = \mathbf{F}(n+1, n)\, \hat{\mathbf{x}}(n \mid \mathcal{Y}_{n-1}) + \mathbf{G}(n)\, \boldsymbol{\alpha}(n). $$

[Signal-flow graph, a model of the dynamic system: y(n) enters a summing node with −C(n) x̂(n|Y_{n−1}) to form the innovation α(n); α(n) is scaled by G(n), added to F(n+1,n) x̂(n|Y_{n−1}), and delayed by z^{-1} to produce x̂(n|Y_{n−1}) for the next step.]
Kalman Gain

The Kalman gain matrix
$$ \mathbf{G}(n) = \mathrm{E}\big[\mathbf{x}(n+1)\, \boldsymbol{\alpha}^{\mathrm{H}}(n)\big]\, \mathbf{R}^{-1}(n) $$
can also be computed recursively:
$$ \mathbf{G}(n) = \mathbf{F}(n+1, n)\, \mathbf{K}(n, n-1)\, \mathbf{C}^{\mathrm{H}}(n)\, \mathbf{R}^{-1}(n). $$

[Block diagram: K(n, n−1) is pre-multiplied by F(n+1, n) and post-multiplied by C^H(n); R(n) is formed from C(n), K(n, n−1), and Q_2(n), inverted, and multiplied in to yield G(n).]
Riccati Equation

The predicted state-error correlation matrix K(n, n−1) can also be computed recursively via the Riccati equation:
$$ \mathbf{K}(n+1, n) = \mathbf{F}(n+1, n)\, \mathbf{K}(n)\, \mathbf{F}^{\mathrm{H}}(n+1, n) + \mathbf{Q}_1(n), $$
$$ \mathbf{K}(n) = \mathbf{K}(n, n-1) - \mathbf{F}(n, n+1)\, \mathbf{G}(n)\, \mathbf{C}(n)\, \mathbf{K}(n, n-1). $$

[Block diagram: K(n, n−1) feeds G(n) C(n) through F(n, n+1) to form K(n); K(n) is propagated by F(n+1, n), its Hermitian transpose, and Q_1(n), then delayed by z^{-1} back to K(n, n−1).]
Summary of Kalman Filtering

[Signal-flow summary: a Riccati equation solver propagates K(n, n−1) from the initial condition K(1, 0); a Kalman gain computer forms G(n) from K(n, n−1); a one-step predictor maps y(n) to x̂(n|Y_n) and x̂(n+1|Y_n) from the initial condition x̂(1|Y_0).]
Kalman Variables

Known parameters:
- x(n): state vector at time n (M×1)
- y(n): observation vector at time n (N×1)
- F(n+1,n): state transition matrix from time n to n+1 (M×M)
- C(n): measurement matrix at time n (N×M)
- Q1(n): correlation matrix of the process noise vector v1(n) (M×M)
- Q2(n): correlation matrix of the observation noise v2(n) (N×N)

Computed variables:
- x̂(n+1|Y_n): predicted estimate of the state vector at time n+1 (M×1)
- x̂(n|Y_n): filtered estimate of the state vector at time n (M×1)
- G(n): Kalman gain matrix at time n (M×N)
- α(n): innovations vector at time n (N×1)
- R(n): correlation matrix of the innovations vector at time n (N×N)
- K(n+1,n): correlation matrix of the error in x̂(n+1|Y_n) (M×M)
- K(n): correlation matrix of the error in x̂(n|Y_n) (M×M)
Kalman Computations

$$ \mathbf{G}(n) = \mathbf{F}(n+1, n)\, \mathbf{K}(n, n-1)\, \mathbf{C}^{\mathrm{H}}(n)\, \big[\mathbf{C}(n)\, \mathbf{K}(n, n-1)\, \mathbf{C}^{\mathrm{H}}(n) + \mathbf{Q}_2(n)\big]^{-1} $$
$$ \boldsymbol{\alpha}(n) = \mathbf{y}(n) - \mathbf{C}(n)\, \hat{\mathbf{x}}(n \mid \mathcal{Y}_{n-1}) $$
$$ \hat{\mathbf{x}}(n+1 \mid \mathcal{Y}_n) = \mathbf{F}(n+1, n)\, \hat{\mathbf{x}}(n \mid \mathcal{Y}_{n-1}) + \mathbf{G}(n)\, \boldsymbol{\alpha}(n) $$
$$ \mathbf{K}(n) = \mathbf{K}(n, n-1) - \mathbf{F}(n, n+1)\, \mathbf{G}(n)\, \mathbf{C}(n)\, \mathbf{K}(n, n-1) $$
$$ \mathbf{K}(n+1, n) = \mathbf{F}(n+1, n)\, \mathbf{K}(n)\, \mathbf{F}^{\mathrm{H}}(n+1, n) + \mathbf{Q}_1(n) $$
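The recursions above can be collected into a single predictor step. A minimal NumPy sketch, assuming a time-invariant and invertible F so that F(n, n+1) = F^{-1}; the scalar random-walk model at the end is only a made-up test case:

```python
import numpy as np

def kalman_step(x_pred, K_pred, y, F, C, Q1, Q2):
    """One cycle of the one-step Kalman predictor.
    x_pred = x-hat(n|Y_{n-1}), K_pred = K(n, n-1); returns the n+1 quantities."""
    alpha = y - C @ x_pred                               # innovation alpha(n)
    R = C @ K_pred @ C.conj().T + Q2                     # innovation correlation R(n)
    G = F @ K_pred @ C.conj().T @ np.linalg.inv(R)       # Kalman gain G(n)
    x_next = F @ x_pred + G @ alpha                      # x-hat(n+1|Y_n)
    K_filt = K_pred - np.linalg.inv(F) @ G @ C @ K_pred  # K(n), using F(n,n+1) = F^{-1}
    K_next = F @ K_filt @ F.conj().T + Q1                # Riccati: K(n+1, n)
    return x_next, K_next

# Toy scalar random walk: F = 1, C = 1, process noise var 0.01, obs. noise var 1.
rng = np.random.default_rng(1)
F = np.eye(1); C = np.eye(1)
Q1 = 0.01 * np.eye(1); Q2 = 1.0 * np.eye(1)
x = np.zeros(1); x_pred = np.zeros(1); K_pred = np.eye(1)
for n in range(200):
    x = F @ x + np.sqrt(0.01) * rng.standard_normal(1)
    y = C @ x + rng.standard_normal(1)
    x_pred, K_pred = kalman_step(x_pred, K_pred, y, F, C, Q1, Q2)
```

For this scalar model the Riccati recursion is data-independent and converges to its steady-state predicted error variance of about 0.105.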
Summary of Kalman Filtering

- Efficient recursive computation based on a state-space model.
- Optimal in the MMSE sense: Kalman filters minimize the trace of the filtered state-error correlation matrix K(n).
- Widely applied in control systems.
- A framework for RLS algorithms in adaptive filtering.
Variants of the Kalman Filter

- Covariance filtering: studied so far.
- Information filtering: propagate K^{-1}(n) (~ Fisher's information matrix) instead of K(n+1, n).
- Square-root filtering: propagate the Cholesky factorization K(n) = K^{1/2}(n) K^{H/2}(n) (covariance form) or the inverse K^{-1}(n) = K^{-1/2}(n) K^{-H/2}(n) (information form). Improved numerical stability.
- UD-factorization or fast Kalman algorithm: a modification of square-root filtering to reduce computational complexity. The numerical stability advantage is lost.
Extended Kalman Filter

Sometimes the basic system model is nonlinear. The Kalman filter can be extended to such a case as well:

1. Linearize the problem approximately by a Taylor series.
2. Approximate the state equations:
$$ \mathbf{x}(n+1) = \mathbf{F}(n+1, n)\, \mathbf{x}(n) + \mathbf{v}_1(n) + \mathbf{d}(n), $$
$$ \mathbf{y}(n) = \mathbf{C}(n)\, \mathbf{x}(n) + \mathbf{v}_2(n), $$
where d(n) is a deterministic (non-random) term.

Kalman filtering still applies, except for a few modifications.
Introduction to RLS Algorithm

- The next step is to apply the method of least squares to update the tap weights of adaptive transversal filters.
- We search for a recursive least squares (RLS) algorithm to update the filter tap weights when new observations (data, input samples) are fed into the filter.
- More efficient utilization of data than in the LMS algorithm. Improved convergence.
- Increased complexity.
- Close relationship to Kalman filtering, but the RLS algorithm is treated here on its own.
Problem Set-Up

The cost function to be minimized at time n is
$$ \mathcal{E}(n) = \sum_{i=1}^{n} \beta(n, i)\, |e(i)|^2, $$
where β(n, i) is a weighting or forgetting factor, and e(i) = d(i) − y(i) is the error between the desired response d(i) and the transversal filter output y(i).

Recall:
$$ y(i) = \sum_{k=0}^{M-1} w_k^*(n)\, u(i-k) = \mathbf{w}^{\mathrm{H}}(n)\, \mathbf{u}(i), $$
$$ \mathbf{u}(i) = [u(i), u(i-1), \ldots, u(i-M+1)]^{\mathrm{T}}, $$
$$ \mathbf{w}(n) = [w_0(n), w_1(n), \ldots, w_{M-1}(n)]^{\mathrm{T}}. $$

Block processing: the taps are fixed over 1 ≤ i ≤ n.
Exponentially Weighted Least Squares

The weighting factor must satisfy 0 < β(n, i) ≤ 1. The standard choice is the exponential weighting
$$ \beta(n, i) = \lambda^{n-i}, \qquad i = 1, 2, \ldots, n, $$
where the forgetting factor λ satisfies 0 < λ ≤ 1; λ = 1 corresponds to the ordinary method of least squares, while λ < 1 de-emphasizes old data.
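The forgetting factor λ sets the effective memory of the exponential window, since the window weights sum to 1/(1 − λ). A quick numeric check (the value λ = 0.99 is just an example):

```python
# Effective memory of the exponential window: sum_{k>=0} lam**k = 1/(1 - lam),
# so lam = 0.99 corresponds to roughly 100 "remembered" samples.
lam = 0.99
window = [lam**k for k in range(10_000)]   # lam^{n-i} for the most recent samples
effective_memory = sum(window)
assert abs(effective_memory - 1 / (1 - lam)) < 1e-6
```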
The Impact of the Value of λ

[Figure: exponential weighting curves for λ = 0.999, 0.99, 0.98, and 0.97; the smaller λ is, the faster old samples are forgotten.]
Matrix Inversion Lemma

General form [S. M. Kay, Fundamentals of Stat. Sign. Proc., Prentice Hall, 1993, p. 571]:
$$ (\mathbf{A} + \mathbf{B}\mathbf{C}\mathbf{D})^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}\,\big(\mathbf{C}^{-1} + \mathbf{D}\mathbf{A}^{-1}\mathbf{B}\big)^{-1}\mathbf{D}\mathbf{A}^{-1}. $$

The textbook's [1] special case: if
$$ \mathbf{A} = \mathbf{B}^{-1} + \mathbf{C}\mathbf{D}^{-1}\mathbf{C}^{\mathrm{H}}, $$
then
$$ \mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\mathbf{C}\,\big(\mathbf{D} + \mathbf{C}^{\mathrm{H}}\mathbf{B}\mathbf{C}\big)^{-1}\mathbf{C}^{\mathrm{H}}\mathbf{B}, $$
where A and B are positive definite M×M matrices.

Another useful special case (Woodbury's identity):
$$ \big(\mathbf{A} + \mathbf{u}\mathbf{u}^{\mathrm{H}}\big)^{-1} = \mathbf{A}^{-1} - \frac{\mathbf{A}^{-1}\mathbf{u}\,\mathbf{u}^{\mathrm{H}}\mathbf{A}^{-1}}{1 + \mathbf{u}^{\mathrm{H}}\mathbf{A}^{-1}\mathbf{u}}, $$
where u is a vector.
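Woodbury's identity is easy to verify numerically. A small NumPy sketch with an arbitrary positive definite A and a random vector u (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 5
# Random symmetric positive definite A and a vector u (illustrative values).
B = rng.standard_normal((M, M))
A = B @ B.T + M * np.eye(M)
u = rng.standard_normal((M, 1))

Ainv = np.linalg.inv(A)
# Woodbury's identity for a rank-one update of the inverse:
woodbury = Ainv - (Ainv @ u @ u.T @ Ainv) / (1 + u.T @ Ainv @ u)
direct = np.linalg.inv(A + u @ u.T)
assert np.allclose(woodbury, direct)
```

The identity replaces an O(M^3) re-inversion by an O(M^2) rank-one correction, which is exactly what the RLS algorithm exploits.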
Exponentially Weighted RLS Algorithm

Apply Woodbury's identity to the exponentially weighted least squares problem. The correlation matrix obeys the rank-one update
$$ \boldsymbol{\Phi}(n) = \lambda\, \boldsymbol{\Phi}(n-1) + \mathbf{u}(n)\, \mathbf{u}^{\mathrm{H}}(n), $$
so that
$$ \boldsymbol{\Phi}^{-1}(n) = \lambda^{-1} \boldsymbol{\Phi}^{-1}(n-1) - \frac{\lambda^{-2}\, \boldsymbol{\Phi}^{-1}(n-1)\, \mathbf{u}(n)\, \mathbf{u}^{\mathrm{H}}(n)\, \boldsymbol{\Phi}^{-1}(n-1)}{1 + \lambda^{-1}\, \mathbf{u}^{\mathrm{H}}(n)\, \boldsymbol{\Phi}^{-1}(n-1)\, \mathbf{u}(n)}. $$

Let (for notational convenience) the inverse correlation matrix be P(n) = Φ^{-1}(n). Then
$$ \mathbf{P}(n) = \lambda^{-1}\, \mathbf{P}(n-1) - \lambda^{-1}\, \mathbf{k}(n)\, \mathbf{u}^{\mathrm{H}}(n)\, \mathbf{P}(n-1), $$
$$ \mathbf{k}(n) = \frac{\lambda^{-1}\, \mathbf{P}(n-1)\, \mathbf{u}(n)}{1 + \lambda^{-1}\, \mathbf{u}^{\mathrm{H}}(n)\, \mathbf{P}(n-1)\, \mathbf{u}(n)}. $$
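The recursions for P(n) and k(n) can be checked against a directly inverted Φ(n). A NumPy sketch, assuming the common regularized initialization P(0) = δ^{-1} I with a matching Φ(0) = δ I so that the two remain exactly consistent (δ, λ, and M are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
M, lam, delta = 3, 0.98, 1e-2
P = np.eye(M) / delta                  # P(0) = delta^{-1} I (assumed initialization)
Phi = delta * np.eye(M)                # matching regularized Phi(0)
for n in range(100):
    u = rng.standard_normal((M, 1))
    Phi = lam * Phi + u @ u.T          # Phi(n) = lam Phi(n-1) + u(n) u^H(n)
    k = (P @ u) / (lam + u.T @ P @ u)  # gain vector k(n) (equivalent scaled form)
    P = (P - k @ u.T @ P) / lam        # P(n) via the matrix inversion lemma
assert np.allclose(P, np.linalg.inv(Phi))
```

Since Woodbury's identity is exact, the recursive P(n) tracks Φ^{-1}(n) up to floating-point roundoff.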
Riccati Equation of the RLS Algorithm

The gain vector can be updated via the Riccati equation of the RLS algorithm (compare to the Kalman filter):
$$ \mathbf{k}(n) = \frac{\lambda^{-1}\, \mathbf{P}(n-1)\, \mathbf{u}(n)}{1 + \lambda^{-1}\, \mathbf{u}^{\mathrm{H}}(n)\, \mathbf{P}(n-1)\, \mathbf{u}(n)} = \big[\lambda^{-1}\, \mathbf{P}(n-1) - \lambda^{-1}\, \mathbf{k}(n)\, \mathbf{u}^{\mathrm{H}}(n)\, \mathbf{P}(n-1)\big]\, \mathbf{u}(n) = \mathbf{P}(n)\, \mathbf{u}(n) = \boldsymbol{\Phi}^{-1}(n)\, \mathbf{u}(n). $$
Time Update of the Tap-Weight Vector

With the cross-correlation vector update z(n) = λ z(n−1) + u(n) d*(n), the tap-weight vector becomes
$$ \mathbf{w}(n) = \boldsymbol{\Phi}^{-1}(n)\, \mathbf{z}(n) = \mathbf{P}(n)\, \mathbf{z}(n) = \lambda\, \mathbf{P}(n)\, \mathbf{z}(n-1) + \mathbf{P}(n)\, \mathbf{u}(n)\, d^*(n). $$
Substituting $\mathbf{P}(n) = \lambda^{-1}\mathbf{P}(n-1) - \lambda^{-1}\mathbf{k}(n)\,\mathbf{u}^{\mathrm{H}}(n)\,\mathbf{P}(n-1)$ in the first term and $\mathbf{k}(n) = \mathbf{P}(n)\,\mathbf{u}(n)$ in the second term gives the time update
$$ \mathbf{w}(n) = \mathbf{w}(n-1) - \mathbf{k}(n)\, \mathbf{u}^{\mathrm{H}}(n)\, \mathbf{w}(n-1) + \mathbf{k}(n)\, d^*(n) = \mathbf{w}(n-1) + \mathbf{k}(n)\, \xi^*(n), $$
where the a priori estimation error
$$ \xi(n) = d(n) - \mathbf{w}^{\mathrm{H}}(n-1)\, \mathbf{u}(n) $$
is to be distinguished from the a posteriori estimation error
$$ e(n) = d(n) - \mathbf{w}^{\mathrm{H}}(n)\, \mathbf{u}(n). $$
RLS Algorithm Illustrations
Summary of the RLS Algorithm

For each n = 1, 2, ...:
$$ \mathbf{k}(n) = \frac{\lambda^{-1}\, \mathbf{P}(n-1)\, \mathbf{u}(n)}{1 + \lambda^{-1}\, \mathbf{u}^{\mathrm{H}}(n)\, \mathbf{P}(n-1)\, \mathbf{u}(n)}, $$
$$ \xi(n) = d(n) - \mathbf{w}^{\mathrm{H}}(n-1)\, \mathbf{u}(n), $$
$$ \mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{k}(n)\, \xi^*(n), $$
$$ \mathbf{P}(n) = \lambda^{-1}\, \mathbf{P}(n-1) - \lambda^{-1}\, \mathbf{k}(n)\, \mathbf{u}^{\mathrm{H}}(n)\, \mathbf{P}(n-1). $$

Typical simple initializations: P(0) = δ^{-1} I with a small positive constant δ, and w(0) = 0.
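The four update equations translate directly into code. A minimal NumPy sketch of the exponentially weighted RLS algorithm for real-valued signals (so Hermitian transposes reduce to ordinary transposes); the 3-tap system-identification test case and all constants are made up for illustration:

```python
import numpy as np

def rls(u_seq, d_seq, M, lam=0.99, delta=1e-2):
    """Exponentially weighted RLS for an M-tap transversal filter."""
    w = np.zeros(M)                        # w(0) = 0
    P = np.eye(M) / delta                  # P(0) = delta^{-1} I
    u_buf = np.zeros(M)                    # tap-input vector u(n)
    for u, d in zip(u_seq, d_seq):
        u_buf = np.concatenate(([u], u_buf[:-1]))   # shift in the new sample
        Pu = P @ u_buf
        k = Pu / (lam + u_buf @ Pu)        # gain vector k(n)
        xi = d - w @ u_buf                 # a priori error xi(n)
        w = w + k * xi                     # w(n) = w(n-1) + k(n) xi*(n)
        P = (P - np.outer(k, u_buf) @ P) / lam      # P(n) update
    return w

# Identify a known 3-tap FIR system from noisy observations (toy example).
rng = np.random.default_rng(4)
w_true = np.array([0.5, -0.3, 0.1])
u_seq = rng.standard_normal(500)
d_seq = np.convolve(u_seq, w_true)[:500] + 0.01 * rng.standard_normal(500)
w_hat = rls(u_seq, d_seq, M=3)
assert np.linalg.norm(w_hat - w_true) < 0.05
```

The per-iteration cost is O(M^2), versus O(M) for LMS, which is the complexity price mentioned in the introduction.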
Weighted Error Squares

Recall that
$$ \mathcal{E}_{\min}(n) = \mathcal{E}_d(n) - \mathbf{z}^{\mathrm{H}}(n)\, \mathbf{w}(n), $$
where now
$$ \mathcal{E}_d(n) = \sum_{i=1}^{n} \lambda^{n-i}\, |d(i)|^2 = \lambda\, \mathcal{E}_d(n-1) + |d(n)|^2. $$

Substituting the recursions for z(n) and w(n) yields the recursive form
$$ \mathcal{E}_{\min}(n) = \lambda\, \mathcal{E}_{\min}(n-1) + \xi^*(n)\, e(n). $$

Note that
$$ e(n) = \gamma(n)\, \xi(n), $$
where the conversion factor is
$$ \gamma(n) = \frac{e(n)}{\xi(n)} = 1 - \mathbf{k}^{\mathrm{H}}(n)\, \mathbf{u}(n). $$
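The conversion-factor relation e(n) = γ(n) ξ(n) with γ(n) = 1 − k^H(n) u(n) holds identically at every step, which a short NumPy run can confirm (the random data and all constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
M, lam = 3, 0.98
w = np.zeros(M)
P = np.eye(M) * 100.0                  # illustrative P(0)
for n in range(50):
    u = rng.standard_normal(M)
    d = rng.standard_normal()
    Pu = P @ u
    k = Pu / (lam + u @ Pu)            # gain vector k(n)
    xi = d - w @ u                     # a priori error
    w = w + k * xi
    e = d - w @ u                      # a posteriori error
    gamma = 1.0 - k @ u                # conversion factor gamma(n)
    assert np.isclose(e, gamma * xi)   # e(n) = gamma(n) xi(n) at every step
    P = (P - np.outer(k, u) @ P) / lam
```

Since γ(n) = λ / (λ + u^H(n) P(n−1) u(n)) with P positive definite, the conversion factor always lies in (0, 1]: the a posteriori error is never larger than the a priori error.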
Convergence Analysis

- The convergence analysis here is rigorous; the direct averaging method (as in the case of the LMS algorithm) is not used.
- The multiple linear regression model is applied:
$$ d(n) = \mathbf{w}_o^{\mathrm{H}}\, \mathbf{u}(n) + e_o(n), $$
with regression parameter w_o and measurement error e_o(n).
- The analysis is carried out for λ = 1, i.e., β(n, i) = λ^{n-i} = 1.
Mean Value

Similarly to the unbiasedness of the LS estimator, the RLS algorithm is convergent in the mean value:
$$ \mathrm{E}[\mathbf{w}(n)] = \mathbf{w}_o, \qquad n \ge M. $$

Proof: With λ = 1,
$$ \mathbf{z}(n) = \sum_{i=1}^{n} \mathbf{u}(i)\, d^*(i) = \sum_{i=1}^{n} \mathbf{u}(i)\, \big[\mathbf{u}^{\mathrm{H}}(i)\, \mathbf{w}_o + e_o^*(i)\big] = \boldsymbol{\Phi}(n)\, \mathbf{w}_o + \sum_{i=1}^{n} \mathbf{u}(i)\, e_o^*(i), $$
so that
$$ \mathbf{w}(n) = \boldsymbol{\Phi}^{-1}(n)\, \mathbf{z}(n) = \mathbf{w}_o + \boldsymbol{\Phi}^{-1}(n) \sum_{i=1}^{n} \mathbf{u}(i)\, e_o^*(i). $$
The claim follows from the above by noting that the expectation of the latter term is zero.
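The convergence in the mean can be illustrated by Monte Carlo: averaging the solution w(n) = Φ^{-1}(n) z(n) over independent trials should recover w_o. A NumPy sketch in which w_o, the noise level, and the trial counts are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
M, N, trials = 3, 40, 400
w_o = np.array([1.0, -0.5, 0.25])          # illustrative regression parameter
est = np.zeros((trials, M))
for t in range(trials):
    A = rng.standard_normal((N, M))        # rows u^T(i), IID Gaussian inputs
    e_o = 0.5 * rng.standard_normal(N)     # zero-mean measurement error
    d = A @ w_o + e_o                      # regression model d(i) = w_o^H u(i) + e_o(i)
    est[t] = np.linalg.solve(A.T @ A, A.T @ d)   # w(n) = Phi^{-1}(n) z(n)
w_bar = est.mean(axis=0)
assert np.linalg.norm(w_bar - w_o) < 0.05
```

Each individual estimate is noisy, but the ensemble average converges to w_o, matching the unbiasedness claim.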
Mean-Squared Tap-Weight Error

Independence assumptions: the input vectors u(1), u(2), ..., u(n) are IID and jointly Gaussian.

The covariance matrix of the filter tap-weight error vector
$$ \boldsymbol{\varepsilon}(n) = \mathbf{w}(n) - \mathbf{w}_o $$
is
$$ \mathbf{K}(n) = \mathrm{E}\big[\boldsymbol{\varepsilon}(n)\, \boldsymbol{\varepsilon}^{\mathrm{H}}(n)\big] = \frac{\sigma^2}{n - M - 1}\, \mathbf{R}^{-1}, \qquad n > M+1, $$
so that the mean-squared tap-weight error is
$$ \mathrm{E}\big[\|\boldsymbol{\varepsilon}(n)\|^2\big] = \mathrm{tr}\big[\mathbf{K}(n)\big] = \frac{\sigma^2}{n - M - 1} \sum_{i=1}^{M} \frac{1}{\lambda_i}, \qquad n > M+1, $$
where the λ_i are the eigenvalues of R.

Proof: See [1, pp. 576–578].

Consequences:
1. The MSE is magnified by 1/λ_min: ill-conditioned correlation matrices cause problems.
2. The MSE decreases almost linearly with the number of iterations n.
Learning Curve: The Output MSE

Two kinds of filter output MSE measures:
- The a priori estimation error ξ(n): large value (the MSE of d(1)) at time n = 1, then decays.
- The a posteriori estimation error e(n): small value at time n = 1, then rises.

The a priori estimation error ξ(n) is the more descriptive learning curve:
$$ J'(n) = \mathrm{E}\big[|\xi(n)|^2\big] = \sigma^2 + \mathrm{tr}\big[\mathbf{R}\, \mathbf{K}(n)\big] = \sigma^2 + \frac{M\, \sigma^2}{n - M - 1}, \qquad n > M+1. $$

Proof: See [1, pp. 578–579].
Learning Curve: The Output MSE (Consequences)

1. The learning curve converges in about 2M iterations, about an order of magnitude faster than the LMS algorithm.
2. As the number of iterations approaches infinity, the MSE approaches the variance σ² of the optimum measurement error e_o(n), i.e., zero excess MSE in WSS environments.
3. The MSE convergence is independent of the eigenvalue spread of the input data correlation matrix.

Remarkable convergence improvements over the LMS algorithm, at the price of increased complexity.
Application Example: Equalization

- Transmitted signal: a random sequence of ±1s.
- Channel impulse response:
$$ h_n = \begin{cases} \dfrac{1}{2}\left[1 + \cos\!\left(\dfrac{2\pi}{W}(n-2)\right)\right], & n = 1, 2, 3, \\[4pt] 0, & \text{otherwise}. \end{cases} $$
- 11-tap FIR equalizer.
- Two SNR values: SNR = 30 dB and SNR = 10 dB.
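The channel taps follow directly from the raised-cosine formula; the parameter W controls how severe the channel distortion is and hence the eigenvalue spread χ(R) seen by the equalizer. A NumPy sketch (the value W = 3.1 is only an example from the usual range):

```python
import numpy as np

def channel(W):
    """Raised-cosine channel of the example: h_n for n = 1, 2, 3 (zero elsewhere)."""
    n = np.arange(1, 4)
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * (n - 2) / W))

h = channel(3.1)   # larger W -> more severe channel, larger eigenvalue spread
assert np.isclose(h[1], 1.0)    # center tap: cos(0) gives h_2 = 1
assert np.isclose(h[0], h[2])   # symmetric about n = 2
```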
Example: Impact of Eigenvalue Spread at High SNR = 30 dB

- Convergence in about 20 (= 2M) iterations.
- Relatively insensitive to the eigenvalue spread.
- Clearly faster convergence and smaller steady-state error than those of the LMS algorithm.
- Eigenvalue spreads considered: χ(R) ∈ {6, 11, 21, 46}.
Example: RLS and LMS Algorithm Comparison at Low SNR = 10 dB

- The RLS algorithm has clearly faster convergence and smaller steady-state error than those of the LMS algorithm, with fewer oscillations.
- Eigenvalue spread: χ(R) = 11.
Relation to Kalman Filter

- The RLS algorithm has many similarities to Kalman filtering, but also some differences.
  - RLS: derivation by a deterministic mathematical model.
  - Kalman: derivation by a stochastic mathematical model.
- A unified approach is based on stochastic state-space models.
- The Kalman filtering approaches in the literature are readily available for RLS algorithms.
Relations of RLS Algorithms and Kalman Filter Variables

[Table of variable correspondences between the RLS algorithm and the Kalman filter not reproduced.]
Summary

- The RLS algorithm was derived as a natural application of the method of least squares to the linear filter adaptation problem. It is based on the matrix inversion lemma.
- Difference to the LMS algorithm: the step-size parameter is replaced by P(n) = Φ^{-1}(n).
- The rate of convergence of the RLS algorithm is typically
  1. an order of magnitude better than that of the LMS algorithm,
  2. invariant to the eigenvalue spread,
  3. such that the excess MSE converges to zero.
- The case λ < 1 is considered later; it changes the last property.