
Embedded Systems Programming, October 2001

From Here to Infinity

By Dan Simon

H∞ filters can be used to estimate system states that cannot be observed directly. In this, they are like Kalman filters. However, only H∞ filters are robust in the face of unpredictable noise sources.

Originally designed in the 1960s for aerospace applications, the Kalman filter ("Kalman Filtering," June 2001, p. 72) is an effective tool for estimating the states of a system. The widespread success of the Kalman filter in aerospace applications led to attempts to apply it to more common industrial applications. However, these attempts quickly made it clear that a serious mismatch existed between the underlying assumptions of Kalman filters and industrial state estimation problems.

Accurate system models are not as readily available for industrial problems. In addition, engineers rarely understand the statistical nature of the noise processes that impinge on such systems. After a decade or so of reappraising the nature and role of Kalman filters, engineers realized they needed a new filter that could handle system modeling errors and noise uncertainty. State estimators that can tolerate such uncertainty are called robust. The H∞ filter was designed for robustness.

What's wrong with Kalman filters?

Recall that the Kalman filter estimates the states x of a linear dynamic system[1] defined by the equations:[2]

x_{k+1} = A x_k + B u_k + w_k
y_k = C x_k + z_k

where A, B, and C are known matrices; k is the time index; x is the state of the system (unavailable for measurement); u is the known input to the system; y is the measured output; and w and z are noise. The two equations represent what is called a discrete time system, because the time variable k is defined only at discrete values (0, 1, 2, ...). Suppose we want to estimate the state x of the above system. We cannot measure the state directly; we can only measure y directly. In this case we can use a Kalman filter to estimate the state. If we refer to our estimate as x̂,

the Kalman filter equations are given as follows:

K_k = A P_k C^T (C P_k C^T + S_z)^{-1}
x̂_{k+1} = (A x̂_k + B u_k) + K_k (y_k − C x̂_k)
P_{k+1} = A P_k A^T + S_w − A P_k C^T S_z^{-1} C P_k A^T

where S_w and S_z are the covariance matrices of w and z, K is the Kalman gain, and P is the variance of the estimation error. The Kalman filter works well, but only under certain conditions.

First, the noise processes need to be zero mean. The average value of the process noise, w_k, must be zero, and the average value of the measurement noise, z_k, must also be zero. This zero mean property must hold not only across the entire time history of the process, but at each time instant as well. That is, the expected value of w and z at each time instant must be equal to zero.

Second, we need to know the standard deviation of the noise processes. The Kalman filter uses the S_w and S_z matrices as design parameters (these are the covariance matrices of the noise processes). That means that if we do not know S_w and S_z, we cannot design an appropriate Kalman filter.

The attractiveness of the Kalman filter is that it results in the smallest possible standard deviation of the estimation error. In other words, the Kalman filter is the minimum variance estimator.

So what do we do if the Kalman filter assumptions are not satisfied? What should we do if the noise processes are not zero mean, or we don't have any information about the noise statistics? And what if we want to minimize the worst case estimation error rather than the variance of the estimation error?

Perhaps we could use the Kalman filter anyway, even though its assumptions are not satisfied, and hope for the best. Maybe if we ignore the problems we encounter they will go away. This is a common solution to our Kalman filter quandary, and it works reasonably well in many cases. However, another option is available: the H∞ filter, also called the minimax filter. The H∞ filter does not make any assumptions about the noise, and it minimizes the worst case estimation error. Before we can attack the H∞ filter problem, we need to cover a couple of simple mathematical preliminaries.

Eigenvalues

If we have a square matrix D, then the eigenvalues of D are defined as all values of λ that satisfy the equation Dg = λg for some vector g. It turns out that if D is an n×n matrix, then there are always n values of λ that satisfy this equation; that is, D has n eigenvalues. For instance, consider the following D matrix:

D = [ 2  1 ]
    [ 1  2 ]

The eigenvalues of this matrix are 1 and 3. In other words, the two solutions to the Dg = λg equation are as follows:[3]

[ 2  1 ] [  1 ]     [  1 ]          [ 2  1 ] [ 1 ]     [ 1 ]
[ 1  2 ] [ −1 ] = 1 [ −1 ]          [ 1  2 ] [ 1 ] = 3 [ 1 ]

Eigenvalues are difficult to compute in general. But if you have a 2×2 matrix of the following form:

D = [ d1  d2 ]
    [ d2  d3 ]

then it becomes much easier. The two eigenvalues of this matrix are:

λ = [ (d1 + d3) ± √( (d1 − d3)² + 4 d2² ) ] / 2

In the example that we will consider in this article, we will need to compute the eigenvalues of a matrix that has the form of the D matrix above.

That's all there is to eigenvalues. It's a high-tech word that is conceptually simple.

Vector norms

Before we can look at the H∞ filter problem we need to define the norm of a vector. A vector norm is defined as follows:[4]

‖x‖ = √(x^T x)

In order to avoid the square root sign in the rest of this article, we will use the square of the norm:

‖x‖² = x^T x

So, for example, if the vector x is given as:

x = [ 1 ]
    [ 2 ]
    [ 3 ]

then the norm squared of x is derived as follows:

‖x‖² = [ 1  2  3 ] [ 1 ]  =  1·1 + 2·2 + 3·3  =  14
                   [ 2 ]
                   [ 3 ]


A weighted vector norm is slightly more complicated, and is defined as:

‖x‖²_Q = x^T Q x

where Q is any matrix with compatible dimensions. So, for example, suppose we have the vector x given previously, and the matrix Q is given as:

Q = [ 2  0  0 ]
    [ 0  4  0 ]
    [ 0  0  5 ]

This Q matrix is called a diagonal matrix. That means that only the entries along the main diagonal are nonzero. In this case, the Q-weighted norm of x is derived as:

‖x‖²_Q = [ 1  2  3 ] [ 2  0  0 ] [ 1 ]  =  2·1·1 + 4·2·2 + 5·3·3  =  63
                     [ 0  4  0 ] [ 2 ]
                     [ 0  0  5 ] [ 3 ]

We see that the elements of Q serve to weight the elements of x when the norm is computed; hence the term weighted norm. In general, Q could be a nondiagonal matrix, but in this article we can assume that Q is diagonal.

H∞ filtering

Now that we know what a weighted vector norm is, we can state the H∞ filter problem and solution. First of all, suppose we have a linear dynamic system as defined at the beginning of this article. Then suppose we want to solve the following problem:

min_{x̂}  max_{w,v}  J

where J is some measure of how good our estimator is. We can view the noise terms w and v as adversaries that try to worsen our estimate. Think of w and v as manifestations of Murphy's Law: they will be the worst possible values. So, given the worst possible values of w and v, we want to find a state estimate that will minimize the worst possible effect that w and v have on our estimation error. This is the problem that the H∞ filter tries to solve. For this reason, the H∞ filter is sometimes called the minimax filter; that is, it tries to minimize the maximum estimation error. We will define the function J as follows:

J = ave ‖x_k − x̂_k‖²_Q / ( ave ‖w_k‖²_W + ave ‖v_k‖²_V )
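These norm computations are only a few lines of code. A minimal C sketch (helper functions of my own, assuming a diagonal Q stored as a plain array of its diagonal entries) reproduces the two results derived above:

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Squared two-norm: x'x. */
double norm_sq(const double *x, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += x[i] * x[i];
    return s;
}

/* Q-weighted squared norm x'Qx for a diagonal Q, where q holds
 * the diagonal entries of Q. */
double weighted_norm_sq(const double *x, const double *q, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += q[i] * x[i] * x[i];
    return s;
}
```

With x = {1, 2, 3}, norm_sq gives 14; with Q = diag(2, 4, 5), weighted_norm_sq gives 63, matching the derivations above.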

FIGURE 1: Velocity estimation error magnitude (feet/sec) vs. time (sec), H∞ filter compared with Kalman filter.

FIGURE 2: Position estimation error (feet) vs. time (sec), H∞ filter compared with Kalman filter.

where the averages are taken over all time samples k. In other words, we want to find an x̂ that minimizes J, so we want to find an x̂ that is as close to x as possible. Murphy wants to maximize J, so Murphy wants to find noise sequences w and v that cause our estimate to be far from the true state x. However, Murphy could make x̂ far from x simply by making the noise very large. We prevent this by putting the average of the weighted norms of w and v in the denominator of J. That way, Murphy has to find noise terms that will make J large without simply using brute force.

The previous equation is the statement of the H∞ filtering problem. Our task is to find a state estimate that makes J small even while Murphy is finding noise terms that make J large. The Q, W, and V matrices that are used in the weighted norms in J are chosen by us, the designers, to obtain desired trade-offs. For example, if we know that the w noise will be smaller than the v noise, we should make the W matrix smaller than the V matrix. This will de-emphasize the importance of the w noise relative to the v noise. Similarly, if we are more concerned about estimation accuracy in specific elements of the state vector x, or if the elements of the state vector x are scaled so that they differ by an order of magnitude or more, then we should define the Q matrix accordingly.

Well, it turns out that this estimation problem is too difficult to solve mathematically. However, we can solve a related problem. We can solve the problem:

J < 1/γ

where γ is some constant number chosen by us, the designers. That is, we can find a state estimate so that the maximum value of J is always less than 1/γ, regardless of the values of the noise terms w and v. The state estimate that forces J < 1/γ is given as follows:

L_k = ( I − γ Q P_k + C^T V^{-1} C P_k )^{-1}
K_k = A P_k L_k C^T V^{-1}
x̂_{k+1} = A x̂_k + B u_k + K_k ( y_k − C x̂_k )
P_{k+1} = A P_k L_k A^T + W

These are the H∞ filter equations. They have the same form as the Kalman filter equations, but the details are different. I is the identity matrix. This means it is a matrix consisting entirely of zeros, except for the main diagonal, which is made up of ones. For instance, a 3×3 identity matrix would look like this:

I = [ 1  0  0 ]
    [ 0  1  0 ]
    [ 0  0  1 ]

K_k is the H∞ gain matrix. The initial state estimate x̂_0 should be initialized to our best guess of x_0, and the initial value P_0 should be set to give acceptable filter performance. In general, P_0 should be small if we are highly confident of our initial state estimate x̂_0. If we use these H∞ filter equations, we guarantee that J < 1/γ. That means that no matter what Murphy does with the noise terms w and v, the ratio of the estimation error to the noise will always be less than 1/γ.

Well then, let's just set γ to a very large number, say a zillion or so! That way we can guarantee that our estimation error is almost zero, since 1/γ will be almost zero. Well, we can't quite do that. The mathematical derivation of the H∞ equations is valid only if γ is chosen such that all of the eigenvalues of the P matrix have magnitudes less than one. If we choose too large a γ, a solution to the H∞ filtering problem does not exist. That is, we cannot find an estimator that will make the estimation error arbitrarily small.

Matrix inversion!?

The observant reader will notice that a matrix inversion is required at every time step, in order to compute the L_k terms. As discussed in the Kalman filter article, the inversion of small matrices is fairly easy, but the inversion of a matrix larger than 3×3 could require too much computational time for a practical implementation.
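The limit on γ is easiest to see in a scalar version of the L_k equation, where L_k = 1/(1 − γqP + c²P/v): the filter equations break down as soon as the term being inverted is driven to zero or below. The C sketch below is a toy scalar check of my own (not from the article) that tests this term for a given γ:

```c
#include <assert.h>

/* Scalar analog of the H-infinity L_k equation:
 *   L = 1 / (1 - gamma*q*p + c*c*p/v).
 * A solution exists only while the quantity being inverted stays
 * positive.  Toy feasibility check, not part of Listing 1. */
int gamma_feasible(double gamma, double q, double p, double c, double v)
{
    return (1.0 - gamma * q * p + c * c * p / v) > 0.0;
}
```

With q = p = c = v = 1, the inverted quantity is 2 − γ, so a modest γ such as 0.01 is fine, but γ = 5 has no solution; "a zillion" is out of the question.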

FIGURE 3: H∞ filter gains K(1) and K(2) vs. time (sec) during the first few seconds of the simulation.

Fortunately, this problem is not insurmountable. Notice from the H∞ filter equations that the calculation of L_k, P_k, and K_k can be performed off line. That is, we do not need the measurements to compute the H∞ gain matrix K_k. We can compute K_k off line (on our development system, for example) and then hard-code K_k into our embedded system.

This solution, however, just shifts the problem to another arena. Now, instead of being strapped for throughput, we are out of memory. If we try to store a K_k matrix in our embedded system for each time step of our program execution, we will quickly run out of memory. But again, a solution exists. When we compute the K_k matrices off line, we notice that they very quickly approach what is called a steady state solution. Although K_k is written as a function of the time step k, the matrices K_k will be constant after just a few time steps. So we can run the H∞ gain equations off line until we see K_k converge to a constant matrix, and then simply hard-code that constant matrix into our embedded system. The result is called the steady-state H∞ filter.

As an aside, notice that we can do the same thing with the Kalman filter equations shown at the beginning of this article. We can compute the Kalman gain matrix K_k off line until we see it converge to a constant matrix. Then we can hard-code that constant matrix into our embedded Kalman filter. The result is called the steady-state Kalman filter.

But does the steady-state H∞ filter perform as well as the time-varying H∞ filter? And does the steady-state Kalman filter perform as well as the time-varying Kalman filter? We can save computational resources by using the steady-state versions of the filters, but how much performance will we sacrifice? The answer depends on the application, but often we lose very little performance. In many cases, the drop in performance will be negligible. We will see this in the simulation results that follow.
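For a one-state system the offline procedure is only a few lines: iterate the gain and variance recursions until the gain stops changing, then hard-code the result. The C sketch below does this for the scalar form of the Kalman recursions given at the beginning of this article (a one-state toy of my own, not the article's vehicle model):

```c
#include <assert.h>
#include <math.h>

/* Iterate the scalar Kalman recursions
 *   K = a*P*c / (c*c*P + Sz)
 *   P <- a*a*P + Sw - a*a*c*c*P*P / (c*c*P + Sz)
 * until P converges, and return the steady-state gain that could
 * be hard-coded into the embedded filter. */
double steady_kalman_gain(double a, double c, double sw, double sz)
{
    double p = 1.0, k = 0.0;
    for (int i = 0; i < 1000; i++) {
        double denom = c * c * p + sz;
        k = a * p * c / denom;
        double pnew = a * a * p + sw - a * a * c * c * p * p / denom;
        if (fabs(pnew - p) < 1e-12)
            break;  /* gain has reached steady state */
        p = pnew;
    }
    return k;
}
```

For example, with a = c = Sw = Sz = 1 the variance settles at P = (1 + √5)/2, giving a steady-state gain of about 0.618.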


LISTING 1: H∞ filter equations

float CT[n][r];          // This is the matrix C'
float CTVinv[n][r];      // This is the matrix C'*Vinv
float CTVinvC[n][n];     // This is the matrix C'*Vinv*C
float CTVinvCP[n][n];    // This is the matrix C'*Vinv*C*P
float QP[n][n];          // This is the matrix Q*P
float QPgamma[n][n];     // This is the matrix Q*P*gamma
float EYEQPgamma[n][n];  // This is the matrix EYE - Q*P*gamma
float Linv[n][n];        // This is the inverse of the L matrix
float L[n][n];           // This is the L matrix
float AP[n][n];          // This is the matrix A*P
float APL[n][n];         // This is the matrix A*P*L
float APLCT[n][r];       // This is the matrix A*P*L*C'
float K[n][r];           // This is the H-infinity gain matrix
float Cxhat[r][1];       // This is the vector C*xhat
float yCxhat[r][1];      // This is the vector y - C*xhat
float KyCxhat[n][1];     // This is the vector K*(y - C*xhat)
float Axhat[n][1];       // This is the vector A*xhat
float Bu[n][1];          // This is the vector B*u
float AxhatBu[n][1];     // This is the vector A*xhat + B*u
float AT[n][n];          // This is the matrix A'
float APLAT[n][n];       // This is the matrix A*P*L*A'

// The following sequence of function calls computes the L matrix.
MatrixTranspose((float*)C, r, n, (float*)CT);
MatrixMultiply((float*)CT, (float*)Vinv, n, r, r, (float*)CTVinv);
MatrixMultiply((float*)CTVinv, (float*)C, n, r, n, (float*)CTVinvC);
MatrixMultiply((float*)CTVinvC, (float*)P, n, n, n, (float*)CTVinvCP);
MatrixMultiply((float*)Q, (float*)P, n, n, n, (float*)QP);
MatrixScalarMultiply((float*)QP, gamma, n, n, (float*)QPgamma);
MatrixSubtraction((float*)EYE, (float*)QPgamma, n, n, (float*)EYEQPgamma);
MatrixAddition((float*)EYEQPgamma, (float*)CTVinvCP, n, n, (float*)Linv);
MatrixInversion((float*)Linv, n, (float*)L);

// The following sequence of function calls computes the K matrix.
MatrixMultiply((float*)A, (float*)P, n, n, n, (float*)AP);
MatrixMultiply((float*)AP, (float*)L, n, n, n, (float*)APL);
MatrixMultiply((float*)APL, (float*)CT, n, n, r, (float*)APLCT);
MatrixMultiply((float*)APLCT, (float*)Vinv, n, r, r, (float*)K);

// The following sequence of function calls updates the xhat vector.
MatrixMultiply((float*)C, (float*)xhat, r, n, 1, (float*)Cxhat);
MatrixSubtraction((float*)y, (float*)Cxhat, r, 1, (float*)yCxhat);
MatrixMultiply((float*)K, (float*)yCxhat, n, r, 1, (float*)KyCxhat);
MatrixMultiply((float*)A, (float*)xhat, n, n, 1, (float*)Axhat);
MatrixMultiply((float*)B, (float*)u, n, m, 1, (float*)Bu);
MatrixAddition((float*)Axhat, (float*)Bu, n, 1, (float*)AxhatBu);
MatrixAddition((float*)AxhatBu, (float*)KyCxhat, n, 1, (float*)xhat);

// The following sequence of function calls updates the P matrix.
MatrixTranspose((float*)A, n, n, (float*)AT);
MatrixMultiply((float*)APL, (float*)AT, n, n, n, (float*)APLAT);
MatrixAddition((float*)APLAT, (float*)W, n, n, (float*)P);
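Once the gain has converged, the whole of Listing 1 collapses to a single update line per state. As a sketch of what the steady-state version might look like for the two-state vehicle example later in this article (the A, B, C, and K values are the ones quoted there; the function itself is mine):

```c
#include <assert.h>
#include <math.h>

/* Steady-state H-infinity update xhat = A*xhat + B*u + K*(y - C*xhat)
 * for the two-state vehicle model (position, velocity), with
 * A = [1 0.1; 0 1], B = [0.005; 0.1], C = [0 1], and the
 * hard-coded gains K = [0.11; 0.09] from the simulation. */
void hinf_steady_update(double xhat[2], double u, double y)
{
    double innov = y - xhat[1];   /* y - C*xhat; C picks off velocity */
    double x0 = xhat[0] + 0.1 * xhat[1] + 0.005 * u + 0.11 * innov;
    double x1 = xhat[1] + 0.1 * u + 0.09 * innov;
    xhat[0] = x0;
    xhat[1] = x1;
}
```

No matrix inversion and no gain recursion remain: the per-step cost is a handful of multiplies and adds.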


Vehicle navigation

To demonstrate the H∞ filter, we will consider a vehicle navigation problem, similar to the one that we used for our Kalman filter example. Say we have a vehicle traveling along a road, and we are measuring its velocity (instead of position). The velocity measurement error is 2 feet/sec (one sigma), the acceleration noise (due to potholes, gusts of wind, motor irregularities, and so on) is 0.2 feet/sec² (one sigma), and the velocity is measured 10 times per second. The linear model that represents this system is similar to the model discussed in the Kalman filter article. The x_{k+1} equation is the same, but the y_k equation is different; in the Kalman filter article we measured position, and in this article we are measuring velocity:

x_{k+1} = [ 1  0.1 ] x_k + [ 0.005 ] u_k + w_k
          [ 0   1  ]       [ 0.1   ]

y_k = [ 0  1 ] x_k + z_k

The state vector x is composed of vehicle position and velocity, u is the known commanded vehicle acceleration, and w is acceleration noise. The measured output y is the measured velocity corrupted by measurement noise z. As explained in the Kalman filter article, we derive the S_z and S_w matrices used in the Kalman filter equations as follows:

S_z = 4

S_w = [ 10^-6    2×10^-5 ]
      [ 2×10^-5  4×10^-4 ]

It looks, though, as if a fly has landed in the ointment. We thought that the velocity measurement noise was Gaussian with a standard deviation of 2 feet/sec. In reality, the measurement noise is uniformly distributed between ±1 foot/sec. That is, the true measurement noise is a random number with all values between ±1 foot/sec equally likely. In addition, we thought that the acceleration noise had a standard deviation of 0.2 feet/sec². In reality, the acceleration noise is twice as bad as we thought. It actually has a standard deviation of 0.4 feet/sec². This is because we took the road less traveled, and it has more potholes than expected.

When we simulate the Kalman filter and the H∞ filter (γ = 0.01) for this problem, we obtain the results shown in Figures 1 and 2 for the velocity estimation error and the position estimation error. Figure 1 shows that the velocity error is noticeably less for the H∞ filter than for the Kalman filter. Figure 2 shows that the position error is much better with the H∞ filter. The Kalman filter does not perform as well as we expect because it is using S_z and S_w matrices that do not accurately reflect reality. The H∞ filter, on the other hand, makes no assumptions at all about the noise w and z, except that nature is throwing the worst possible noise at our estimator.

Next, we consider the steady-state H∞ filter. Figure 3 shows the H∞ filter gain matrix K_k as a function of time for the first five seconds of the simulation. Since we are estimating two states, and we have one measurement, K_k has two elements. Figure 3 shows that the elements of K_k quickly approach the approximate steady state values K_k(1) = 0.11 and K_k(2) = 0.09.

Let's pretend that we are poor struggling embedded systems engineers who can't afford a fancy enough microcontroller to perform real-time matrix inversions. To compensate, we simulate the steady-state H∞ filter using the hard-coded gain matrix:

K = [ 0.11 ]
    [ 0.09 ]

The results are practically indistinguishable from the time-varying H∞ filter. We can avoid all the work associated with real-time matrix inversion and pay virtually no penalty in terms of performance. The Matlab code that I used to generate these results is available at www.embedded.com/code.htm. If you use Matlab to run the code you will get different results every time because of the random noise that is simulated. Sometimes the Kalman filter will perform even better than the H∞ filter. But the H∞ filter generally outperforms the Kalman filter for this example.

Pseudocode for implementing an H∞ filter in a high-level language is shown in Listing 1. This code assumes that the linear system has n states, m inputs, and r outputs. The following variables are assumed:

A: an n×n matrix
B: an n×m matrix
C: an r×n matrix
xhat: an n×1 vector
y: an r×1 vector
u: an m×1 vector
P: an n×n matrix
Vinv: an r×r matrix
W: an n×n matrix
EYE: the n×n identity matrix
Q: an n×n matrix
gamma: a scalar

Infinity and beyond

The simple example shown in this article demonstrates the potential benefits of H∞ filtering. However, H∞ theory is actually much more general than we have discussed here. It can also be used to design control systems (not just estimators) that are robust to unknown noise sources. Our example showed the robustness of H∞ filtering to unknown noise sources. However, the H∞ filtering problem can also be formulated in such a way that the filter is robust to system modeling errors. In our example, we assumed that the system model was given by:

x_{k+1} = A x_k + B u_k + w_k

where A and B were known matrices. However, suppose we have some uncertainty in our model. Suppose the system model is given by:

x_{k+1} = (A + ΔA) x_k + (B + ΔB) u_k + w_k

where, as before, A and B are known, but ΔA and ΔB are unknown matrices. We may have some bounds on ΔA and ΔB, but we do not know their exact values. Then the H∞ filtering problem can be reformulated to minimize the effect of these modeling errors on our estimation error. The H∞ filter can therefore be designed to be robust to uncertainty in the system model.

It appears that H∞ filtering was first introduced in 1987 by Mike Grimble, a professor at the University of Strathclyde in the UK. It has its roots in the mathematics developed in 1981 by George Zames, a professor at McGill University in Montreal. Since the late 1980s, there have been several different approaches to the formulation of the H∞ filter. In Kalman filtering, different approaches all lead to the same (or similar) equations. With H∞ filtering, different approaches lead to widely different equations. Perhaps this is one reason that H∞ filtering is not as well known as Kalman filtering; so many different formulations of the H∞ filter are available that it is difficult to get an overarching view of the entire field.

The Kalman filter is more widely used and understood, and generally gives better performance than the H∞ filter. The H∞ filter requires more tuning to get acceptable performance. Looking at the H∞ filter equations, we can see that the tuning parameters include γ, P_0, V, W, and Q. That's a lot of parameters to tune, especially when P_0, V, W, and Q are all matrices. For this reason, it is often more difficult to tune an H∞ filter than a Kalman filter. However, the effort required to tune an H∞ filter can be worthwhile, as seen from the simulation results in this article.

Since Kalman filtering generally performs better than H∞ filtering, but the H∞ approach is so intuitively appealing, some researchers have proposed mixed Kalman/H∞ filtering. This is usually referred to as mixed H₂/H∞ filtering. This is an approach to find the best state estimator in the Kalman filter sense, subject to the constraint that the maximum estimation error is bounded.

Dan Simon is a professor in the electrical and computer engineering department at Cleveland State University and a consultant to industry. His teaching and research interests include filtering, control theory, embedded systems, fuzzy logic, and neural networks. He teaches a course on H∞-based control and estimation, and is presently working on a project to evaluate aircraft engine health using H∞ filtering. You can contact him at [email protected].

Endnotes

1. Also discussed in the Kalman filtering article was the concept of linear systems, which should be considered a prerequisite to this article.

2. The Kalman filter is also called the H₂ filter because it minimizes the two-norm of the transfer function from the system noise to the state estimation error.

3. Note that D has to be square in order for the concept of eigenvalues to exist. If D is not square, then, by definition, it does not have any eigenvalues. The g vector in the eigenvalue equation is called an eigenvector, but we will not concern ourselves with eigenvectors in this article.

4. More specifically, this is the two-norm of a vector. However, we will refer to it simply as the norm for ease of notation.

For further reading

Shen, W. and L. Deng. "Game Theory Approach to Discrete H∞ Filter Design," IEEE Transactions on Signal Processing, pp. 1092-1095, April 1997. This is an academic paper that derives the equations on which this article is based.

Simon, D. and H. El-Sherief. "Hybrid Kalman/Minimax Filtering in Phase-Locked Loops," Control Engineering Practice, pp. 615-623, 1996. This paper presents a formulation of the H∞ filter that is different from the one given in this article. The H∞ filter is then combined with a Kalman filter to obtain a new estimator that performs better than either the Kalman filter or the H∞ filter alone.

www.innovatia.com/software/papers/minimax.htm. This web page (and its links) summarizes the approach and results of the Control Engineering Practice paper referenced above.

Burl, J. Linear Optimal Control. Reading, MA: Addison-Wesley, 1999. This is one of the best and most accessible texts that covers H∞ filtering. However, that's not saying much; there aren't many books on the subject, perhaps because of its novelty. One good point about this book is that the author makes all of the Matlab files used in the examples available on the web. However, this text is limited to continuous time systems and is full of confusing typographical errors. Hopefully these problems will be addressed in a future printing.

Green, M. and D. Limebeer. Linear Robust Control. Englewood Cliffs, NJ: Prentice Hall, 1995. This reference, slightly more academically oriented than Burl's book, offers good theoretical coverage of discrete time H∞ filtering, but it doesn't have any examples. It is out of print, but you can find copies (new and used) from bookstores on the Internet.
