Chap8 Kalman Mapping Howie


    RI 16-735, Howie Choset, with slides from George Kantor, G.D. Hager, and D. Fox

    Localization, Mapping, SLAM and The Kalman Filter according to George

    Robotics Institute 16-735

    http://voronoi.sbp.ri.cmu.edu/~motion

    Howie Choset

    http://voronoi.sbp.ri.cmu.edu/~choset


    The Problem

    • What is the world around me (mapping)

     – sense from various positions

     – integrate measurements to produce map

     – assumes perfect knowledge of position

    • Where am I in the world (localization)

     – sense

     – relate sensor readings to a world model

     – compute location relative to model

     – assumes a perfect world model

    • Together, these are SLAM (Simultaneous Localization and Mapping)


    Localization

    Tracking: Known initial position

    Global Localization: Unknown initial position

    Re-Localization: Incorrect known position

    (kidnapped robot problem)

    Challenges

     – Sensor processing

     – Position estimation

     – Control Scheme

     – Exploration Scheme

     – Cycle Closure

     – Autonomy

     – Tractability

     – Scalability

    SLAM: Mapping while tracking locally and globally


    Representations for Robot Localization

    (Diagram: a rough timeline of localization representations, spanning the AI and Robotics communities.)

    • Kalman filters (late ’80s): Gaussians, approximately linear models, position tracking
    • Discrete approaches (’95)
     – Topological representation (’95): uncertainty handling (POMDPs), occasional global localization, recovery
     – Grid-based, metric representation (’96): global localization, recovery
    • Particle filters (’99): sample-based representation, global localization, recovery
    • Multi-hypothesis (’00): multiple Kalman filters, global localization, recovery


    Three Major Map Models

     – Resolution vs. Scale: Grid-Based – discrete localization; Topological – localize to nodes; Feature-Based – arbitrary localization
     – Exploration Strategies: Grid-Based – frontier-based exploration; Topological – graph exploration; Feature-Based – no inherent exploration
     – Computational Complexity: Grid-Based – grid size and resolution; Topological – minimal complexity; Feature-Based – landmark covariance (N²)


     Atlas Framework

    • Hybrid Solution:

     – Local features extracted from local grid map.
     – Local map frames created at complexity limit.

     – Topology consists of connected local map frames.

     Authors: Chong, Kleeman; Bosse, Newman, Leonard, Soika, Feiten, Teller 


    H-SLAM


    What does a Kalman Filter do, anyway?

    Given the linear dynamical system:

      x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
      y(k) = H(k) x(k) + w(k)

    where

      x(k) is the n-dimensional state vector (unknown)
      u(k) is the m-dimensional input vector (known)
      y(k) is the p-dimensional output vector (known, measured)
      F(k), G(k), H(k) are appropriately dimensioned system matrices (known)
      v(k), w(k) are zero-mean, white Gaussian noise with (known) covariance matrices Q(k), R(k)

    the Kalman Filter is a recursion that provides the "best" estimate of the state vector x.
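    To make the notation concrete, here is a minimal simulation sketch of such a system in Python/numpy for a 1-D point mass with state [position, velocity]; the matrices, noise levels, and input are illustrative assumptions, not values from the slides.

```python
import numpy as np

# Sketch: simulate x(k+1) = F x(k) + G u(k) + v(k), y(k) = H x(k) + w(k)
# for a 1-D point mass with state [position, velocity].
dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])              # constant-velocity dynamics
G = np.array([[0.5 * dt**2],
              [dt]])                    # acceleration (force/mass) input
H = np.array([[1.0, 0.0]])              # we measure position only
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0])
for k in range(50):
    u = np.array([1.0])                              # known input
    v = rng.multivariate_normal(np.zeros(2), Q)      # process noise ~ N(0, Q)
    w = rng.multivariate_normal(np.zeros(1), R)      # sensor noise ~ N(0, R)
    x = F @ x + G @ u + v                            # state propagation
    y = H @ x + w                                    # noisy measurement
```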


    What’s so great about that?

    • noise smoothing (improve noisy measurements)

    • state estimation (for state feedback)

    • recursive (computes next estimate using only most recent measurement)

      x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
      y(k) = H(k) x(k) + w(k)


    How does it work?

      x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
      y(k) = H(k) x(k) + w(k)

    1. Prediction based on last estimate:

      x̂(k+1|k) = F(k) x̂(k|k) + G(k) u(k)
      ŷ = H(k) x̂(k+1|k)

    2. Calculate correction based on prediction and current measurement:

      Δx = f( y(k+1), x̂(k+1|k) )

    3. Update prediction:

      x̂(k+1|k+1) = x̂(k+1|k) + Δx


    Finding the correction (no noise!)

      y = H x
      Ω = { x | H x = y }

    We want the best estimate to be consistent with the sensor readings: given the prediction x̂(k+1|k) and the output y, find Δx so that x̂ = x̂(k+1|k) + Δx is the "best" estimate of x.

    The "best" estimate comes from the shortest Δx.


    Some linear algebra

      Ω = { x | H x = y }

    A vector a is parallel to Ω if H a = 0, i.e. if a lies in the null space of H:

      Null(H) = { a ≠ 0 | H a = 0 }

    For all v ∈ Null(H), v ⊥ b whenever b is a weighted sum of the columns of H^T, i.e. b = H^T γ.


    Finding the correction (no noise!)

    Given the prediction x̂(k+1|k) and output y, find Δx so that x̂ = x̂(k+1|k) + Δx is the "best" estimate of x.

      y = H x,   Ω = { x | H x = y }

    The "best" estimate comes from the shortest Δx, and the shortest Δx is perpendicular to Ω:

      Δx ∈ null(H)⊥  ⇒  Δx ∈ column(H^T)  ⇒  Δx = H^T γ


    Finding the correction (no noise!)

    Given the prediction x̂(k+1|k) and output y, find Δx so that x̂ = x̂(k+1|k) + Δx is the "best" estimate of x.

      y = H x,   Ω = { x | H x = y }

    The shortest Δx is perpendicular to Ω:

      Δx ∈ null(H)⊥  ⇒  Δx ∈ column(H^T)  ⇒  Δx = H^T γ

    Define the innovation ν (real output minus estimated output):

      ν = y − H x̂(k+1|k) = H( x − x̂(k+1|k) )


    Finding the correction (no noise!)

    Given the prediction x̂(k+1|k) and output y, find Δx so that x̂ = x̂(k+1|k) + Δx is the "best" estimate of x.

      y = H x,   Ω = { x | H x = y }

    The shortest Δx is perpendicular to Ω, so Δx = H^T γ. Assume γ is a linear function of ν (a guess, a hope; let's face it, the correction has to be some function of the innovation):

      Δx = H^T K ν   for some p × p matrix K


    Finding the correction (no noise!)

    Given the prediction x̂(k+1|k) and output y, find Δx so that x̂ = x̂(k+1|k) + Δx is the "best" estimate of x.

      y = H x,   Ω = { x | H x = y }

    We require that the corrected estimate be consistent with the measurement:

      H( x̂(k+1|k) + Δx ) = y


    Finding the correction (no noise!)

      y = H x,   Ω = { x | H x = y }

    We require H( x̂(k+1|k) + Δx ) = y, so

      H Δx = y − H x̂(k+1|k) = H( x − x̂(k+1|k) ) = ν


    Finding the correction (no noise!)

      y = H x,   Ω = { x | H x = y }

    We require H Δx = ν. Substituting Δx = H^T K ν yields

      H H^T K ν = ν

    (Δx must be perpendicular to Ω because anything in the range space of H^T is perpendicular to Ω.)


    Finding the correction (no noise!)

      y = H x,   Ω = { x | H x = y }

    We require H Δx = ν. Substituting Δx = H^T K ν yields H H^T K ν = ν, so

      K = ( H H^T )^-1

    (The fact that this linear solution solves the equation makes assuming K is linear a kosher guess.)


    Finding the correction (no noise!)

      y = H x,   Ω = { x | H x = y }

    We require H Δx = ν. Substituting Δx = H^T K ν yields H H^T K ν = ν, so K = ( H H^T )^-1 and

      Δx = H^T ( H H^T )^-1 ν
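    A quick numerical illustration of this correction (the example values are my own, not from the slides): with fewer measurements than states, Δx = H^T (H H^T)^-1 ν is the smallest correction that puts the estimate back onto Ω.

```python
import numpy as np

# Noiseless correction: smallest Δx such that H (x_pred + Δx) = y.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])            # 2 measurements, 3 states
x_true = np.array([1.0, 2.0, 3.0])
x_pred = np.array([0.5, 2.5, 2.0])         # prediction x̂(k+1|k)
y = H @ x_true                             # noiseless output

nu = y - H @ x_pred                        # innovation ν
dx = H.T @ np.linalg.solve(H @ H.T, nu)    # Δx = H^T (H H^T)^-1 ν
x_new = x_pred + dx

print(np.allclose(H @ x_new, y))           # True: the estimate now lies on Ω
```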


     A Geometric Interpretation

    (Figure: the hyperplane Ω = { x | H x = y }, the prediction x̂(k+1|k), and the correction Δx from the prediction onto Ω.)


     A Simple State Observer 

    System:

      x(k+1) = F x(k) + G u(k)
      y(k) = H x(k)

    Observer:

    1. Prediction:  x̂(k+1|k) = F x̂(k|k) + G u(k)

    2. Compute correction:  Δx = H^T ( H H^T )^-1 ( y(k+1) − H x̂(k+1|k) )

    3. Update:  x̂(k+1|k+1) = x̂(k+1|k) + Δx
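    A minimal sketch of one step of this observer in Python/numpy; the function name and interface are mine, assuming the noise-free model above.

```python
import numpy as np

def simple_observer_step(x_est, u, y_next, F, G, H):
    """One step of the simple (noise-free) observer above."""
    # 1. prediction
    x_pred = F @ x_est + G @ u
    # 2. correction: smallest Δx making H (x_pred + Δx) = y(k+1)
    nu = y_next - H @ x_pred
    dx = H.T @ np.linalg.solve(H @ H.T, nu)
    # 3. update
    return x_pred + dx
```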


    Caveat #1

    Note: the observer presented here is not a very good observer. Specifically, it is not guaranteed to converge for all systems. Still, the intuition behind this observer is the same as the intuition behind the Kalman filter, and the problems will be fixed in the following slides.

    It really corrects only toward the current sensor information, so if you are on the hyperplane but not at the right place, you get no correction. (I am waving my hands here; look in the book.)


    Estimating a distribution for x

    Our estimate of x is not exact! We can do better by estimating a joint Gaussian distribution p(x):

      p(x) = 1 / ( (2π)^(n/2) |P|^(1/2) ) · exp( −(1/2) (x − x̂)^T P^-1 (x − x̂) )

    where

      P = E[ (x − x̂)(x − x̂)^T ]

    is the covariance matrix.


    Finding the correction (geometric intuition)

    Given the prediction x̂(k+1|k), covariance P, and output y, find Δx so that x̂ = x̂(k+1|k) + Δx is the "best" (i.e. most probable) estimate of x.

      Ω = { x | H x = y }

    The most probable estimate is the one that:

    1. satisfies x̂(k+1|k+1) = x̂(k+1|k) + Δx
    2. minimizes Δx^T P^-1 Δx

    (This follows from the Gaussian density p(x) = 1/((2π)^(n/2) |P|^(1/2)) exp( −(1/2)(x − x̂)^T P^-1 (x − x̂) ).)


     A new kind of distance

    Suppose we define a new inner product on R^n to be

      ⟨ x1, x2 ⟩ = x1^T P^-1 x2   (this replaces the old inner product)

    Then we can define a new norm:

      ‖x‖² = ⟨ x, x ⟩ = x^T P^-1 x

    The x̂ in Ω that minimizes ‖Δx‖ is the orthogonal projection of x̂(k+1|k) onto Ω, so Δx is orthogonal to Ω:

      ⟨ ω, Δx ⟩ = 0 for all ω in T(Ω) = null(H)

      ⟨ ω, Δx ⟩ = ω^T P^-1 Δx = 0  iff  Δx ∈ column(P H^T)


    Finding the correction (for real this time!)

    Assuming that Δx is linear in ν = y − H x̂(k+1|k):

      Δx = P H^T K ν

    The condition y = H( x̂(k+1|k) + Δx ) gives H Δx = y − H x̂(k+1|k) = ν.

    Substituting yields:

      H Δx = H P H^T K ν = ν   ⇒   K = ( H P H^T )^-1

      ∴ Δx = P H^T ( H P H^T )^-1 ν


     A Better State Observer 

    We can create a better state observer following the same three steps, but now we must also estimate the covariance matrix P.

    System:

      x(k+1) = F x(k) + G u(k) + v(k)
      y(k) = H x(k)

    where v(k) is a sample of a zero-mean Gaussian distribution with covariance Q.

    Step 1: Prediction

    We start with x̂(k|k) and P(k|k).

      x̂(k+1|k) = F x̂(k|k) + G u(k)

    What about P? From the definition:

      P(k|k) = E[ (x(k) − x̂(k|k))(x(k) − x̂(k|k))^T ]

    and

      P(k+1|k) = E[ (x(k+1) − x̂(k+1|k))(x(k+1) − x̂(k+1|k))^T ]

    (Where did the noise go? It is absorbed by the expected value, as the next slide shows.)


    Continuing Step 1

    To make life a little easier, let's shift notation slightly and write x_k, x̂_k, P_k for x(k), x̂(k|k), P(k|k). Then

      P(k+1|k) = E[ (x_{k+1} − x̂_{k+1})(x_{k+1} − x̂_{k+1})^T ]
               = E[ (F x_k + G u_k + v_k − F x̂_k − G u_k)(F x_k + G u_k + v_k − F x̂_k − G u_k)^T ]
               = E[ (F(x_k − x̂_k) + v_k)(F(x_k − x̂_k) + v_k)^T ]
               = E[ F(x_k − x̂_k)(x_k − x̂_k)^T F^T + 2 F(x_k − x̂_k) v_k^T + v_k v_k^T ]
               = F E[ (x_k − x̂_k)(x_k − x̂_k)^T ] F^T + E[ v_k v_k^T ]
               = F P_k F^T + Q

    (the cross term vanishes because v_k is zero-mean and independent of the estimation error), i.e.

      P(k+1|k) = F P(k|k) F^T + Q


    Step 2: Computing the correction

    From Step 1 we get x̂(k+1|k) and P(k+1|k). Now we use these to compute Δx:

      Δx = P(k+1|k) H^T ( H P(k+1|k) H^T )^-1 ( y(k+1) − H x̂(k+1|k) )

    For ease of notation, define W = P(k+1|k) H^T ( H P(k+1|k) H^T )^-1, so that

      Δx = W ν


    Step 3: Update

      x̂(k+1|k+1) = x̂(k+1|k) + W ν

      P(k+1|k+1) = E[ (x(k+1) − x̂(k+1|k+1))(x(k+1) − x̂(k+1|k+1))^T ]
                 = E[ (x(k+1) − x̂(k+1|k) − W ν)(x(k+1) − x̂(k+1|k) − W ν)^T ]

    (just take my word for it…)

      P(k+1|k+1) = P(k+1|k) − W H P(k+1|k) H^T W^T


    Just take my word for it…

      P(k+1|k+1) = E[ (x(k+1) − x̂(k+1|k+1))(x(k+1) − x̂(k+1|k+1))^T ]
                 = E[ (x(k+1) − x̂(k+1|k) − W ν)(x(k+1) − x̂(k+1|k) − W ν)^T ]
                 = E[ (x(k+1) − x̂(k+1|k))(x(k+1) − x̂(k+1|k))^T − 2 W ν (x(k+1) − x̂(k+1|k))^T + W ν ν^T W^T ]

    Using ν = H( x(k+1) − x̂(k+1|k) ):

      P(k+1|k+1) = P(k+1|k) − 2 W H P(k+1|k) + W H P(k+1|k) H^T W^T

    Substituting W = P(k+1|k) H^T ( H P(k+1|k) H^T )^-1 gives

      W H P(k+1|k) H^T W^T = P(k+1|k) H^T ( H P(k+1|k) H^T )^-1 H P(k+1|k) = W H P(k+1|k)

    so the last two terms partially cancel and

      P(k+1|k+1) = P(k+1|k) − W H P(k+1|k) H^T W^T


    Better State Observer Summary

    System:

      x(k+1) = F x(k) + G u(k) + v(k)
      y(k) = H x(k)

    Observer:

    1. Predict:

      x̂(k+1|k) = F x̂(k|k) + G u(k)
      P(k+1|k) = F P(k|k) F^T + Q

    2. Correction:

      W = P(k+1|k) H^T ( H P(k+1|k) H^T )^-1
      Δx = W ( y(k+1) − H x̂(k+1|k) )

    3. Update:

      x̂(k+1|k+1) = x̂(k+1|k) + W ν
      P(k+1|k+1) = P(k+1|k) − W H P(k+1|k) H^T W^T


    • Note: there is a problem with the previous slide, namely that the covariance matrix of the estimate P will be singular. This makes sense because with perfect sensor measurements the uncertainty in some directions will be zero: there is no uncertainty in the directions perpendicular to Ω.

    P lives in the state space, and the directions associated with the (noise-free) sensed quantities go to zero. In the update step, a zero-noise measurement squeezes P down.

    In most cases, when you do the next prediction step, the process covariance matrix Q gets added to P(k|k), the result is nonsingular, and everything is OK again.

    There's actually nothing wrong with this, except that you can't really call the result a "covariance" matrix, because "sometimes" it is not a covariance matrix.

    Plus, let's not be ridiculous: all sensors have noise.


    Finding the correction (with output noise)

      y = H x + w
      Ω = { x | H x = y }

    The previous results require that you know which hyperplane to aim for. Because there is now sensor noise, we don't know where to aim, so we can't directly use our method. If we can determine which hyperplane to aim for, then the previous result applies.

    We find the hyperplane in question as follows:

    1. project the estimate into the output space
    2. find the most likely point in output space based on the measurement and the projected prediction
    3. the desired hyperplane is the preimage of this point


    Projecting the prediction (putting current state estimates into sensor space)

      x̂(k+1|k)  →  ŷ = H x̂(k+1|k)
      P(k+1|k)  →  R̂ = H P(k+1|k) H^T

    x̂(k+1|k) and P(k+1|k) live in the state space (n-dimensional); ŷ and R̂ live in the output space (p-dimensional).


    Finding most likely output

    The objective is to find the most likely output that results from two independent Gaussian distributions:

      (y, R) — the measurement and its associated covariance
      (ŷ, R̂) — the projected prediction (what you think your measurement should be, and how confident you are)

    Since they are independent, we multiply the two densities: we want both to be true at the same time.

    Fact (Kalman gains): the product of two Gaussian distributions given by the mean/covariance pairs (x1, C1) and (x2, C2) is proportional to a third Gaussian with mean

      x3 = x1 + K ( x2 − x1 )

    and covariance

      C3 = C1 − K C1

    where

      K = C1 ( C1 + C2 )^-1

    (Strange, but true: this is symmetric in the two distributions.)
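    A quick 1-D sanity check of this fact (the values are arbitrary, and the comparison against the symmetric "information form" is my addition, not from the slides):

```python
import numpy as np

# Product of two 1-D Gaussians via the Kalman-gain form.
x1, C1 = 2.0, 4.0          # first mean/variance
x2, C2 = 5.0, 1.0          # second mean/variance

K = C1 / (C1 + C2)         # gain
x3 = x1 + K * (x2 - x1)    # fused mean
C3 = C1 - K * C1           # fused variance

# The same fusion written symmetrically (information form):
x3_alt = (x1 / C1 + x2 / C2) / (1 / C1 + 1 / C2)
C3_alt = 1.0 / (1 / C1 + 1 / C2)
print(np.isclose(x3, x3_alt), np.isclose(C3, C3_alt))   # True True
```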


    Most likely output (cont.)

    Using the Kalman gains, the most likely output is

      y* = ŷ + R̂ ( R̂ + R )^-1 ( y − ŷ )
         = H x̂(k+1|k) + H P H^T ( H P H^T + R )^-1 ( y − H x̂(k+1|k) )


    Finding the Correction

      Ω = { x | H x = y* }

    Now we can compute the correction as we did in the noiseless case, this time using y* instead of y. In other words, y* tells us which hyperplane to aim for. The result is:

      Δx = P H^T ( H P H^T )^-1 ( y* − H x̂(k+1|k) )

    (We are not going all the way to y, but splitting the difference according to how confident we are in the sensor versus the prediction.)


    Finding the Correction (cont.)

      Δx = P H^T ( H P H^T )^-1 ( y* − H x̂(k+1|k) )
         = P H^T ( H P H^T )^-1 H P H^T ( H P H^T + R )^-1 ( y − H x̂(k+1|k) )
         = P H^T ( H P H^T + R )^-1 ( y − H x̂(k+1|k) )

    For convenience, we define

      W = P H^T ( H P H^T + R )^-1

    so that

      Δx = W ( y − H x̂(k+1|k) )


    Correcting the Covariance Estimate

    The correction to the covariance estimate is computed from the definition of the covariance matrix, in much the same way that we computed the correction for the "better observer". The answer turns out to be:

      P(k+1|k+1) = P(k+1|k) − W H P(k+1|k) H^T W^T


    LTI Kalman Filter Summary

    System:

      x(k+1) = F x(k) + G u(k) + v(k)
      y(k) = H x(k) + w(k)

    Kalman Filter:

    1. Predict:

      x̂(k+1|k) = F x̂(k|k) + G u(k)
      P(k+1|k) = F P(k|k) F^T + Q

    2. Correction:

      S = H P(k+1|k) H^T + R
      W = P(k+1|k) H^T S^-1
      Δx = W ( y(k+1) − H x̂(k+1|k) )

    3. Update:

      x̂(k+1|k+1) = x̂(k+1|k) + W ν
      P(k+1|k+1) = P(k+1|k) − W S W^T
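    A minimal sketch of these three steps in Python/numpy, assuming time-invariant F, G, H, Q, R; the function name and interface are mine, not from the slides.

```python
import numpy as np

def kalman_step(x_est, P, u, y_next, F, G, H, Q, R):
    """One predict/correct/update cycle of the LTI Kalman filter above."""
    # 1. Predict
    x_pred = F @ x_est + G @ u
    P_pred = F @ P @ F.T + Q
    # 2. Correction
    S = H @ P_pred @ H.T + R             # innovation covariance
    W = P_pred @ H.T @ np.linalg.inv(S)  # gain
    nu = y_next - H @ x_pred             # innovation ν
    # 3. Update
    x_new = x_pred + W @ nu
    P_new = P_pred - W @ S @ W.T
    return x_new, P_new
```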


    Kalman Filters


    Kalman Filter for Dead Reckoning

    • Robot moves along a straight line with state x = [xr , vr ]T

    • u is the force applied to the robot

    • Newton tells us f = m a

    • Robot has velocity sensor 

    Process noise comes from a zero-mean Gaussian V.

    Sensor noise comes from a zero-mean Gaussian W.
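    A sketch of plausible model matrices for this setup; the time step, mass, and noise levels are illustrative assumptions, not values from the slides.

```python
import numpy as np

# Dead-reckoning model: state x = [x_r, v_r], input u = force, sensor = velocity.
dt, m = 0.1, 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])       # x_r' = x_r + v_r*dt,  v_r' = v_r + (u/m)*dt
G = np.array([[0.0],
              [dt / m]])         # force enters through the acceleration u/m
H = np.array([[0.0, 1.0]])       # velocity sensor only
Q = np.diag([1e-4, 1e-3])        # process noise covariance ("V" on the slide)
R = np.array([[0.05]])           # sensor noise covariance ("W" on the slide)
```

    Because only velocity is measured, the absolute position is never pinned down, which is exactly the observability point raised two slides below.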


    Set up

     At some time k

    PUT ELLIPSE FIGURE HERE

     Assume


    Observability

    Recall from last time

     Actually, the previous example is not observable, but it is still nice to use a Kalman filter.


    Extended Kalman Filter 

    • Life is not linear 

    • Predict


    Extended Kalman Filter 

    • Update


    Be wise, and linearize..


    Data Association

    • BIG PROBLEM: does the ith measurement correspond to the jth landmark?

    Compute the innovation for each candidate pairing and pick the smallest.
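    The slide's own equations are not reproduced above; the sketch below uses the standard nearest-neighbour rule of scoring each pairing by the Mahalanobis distance of its innovation, which is an assumption about what "smallest" means here.

```python
import numpy as np

def associate(z_i, predicted_meas, innovation_covs):
    """Nearest-neighbour data association (sketch).
    predicted_meas[j]: predicted measurement of landmark j.
    innovation_covs[j]: innovation covariance S_j for that landmark."""
    best_j, best_d2 = None, np.inf
    for j, (z_hat, S) in enumerate(zip(predicted_meas, innovation_covs)):
        nu = z_i - z_hat                             # innovation
        d2 = float(nu @ np.linalg.solve(S, nu))      # Mahalanobis distance squared
        if d2 < best_d2:
            best_j, best_d2 = j, d2
    return best_j, best_d2
```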


    Kalman Filter for SLAM (simple)

    state

    Process model

    Inputs are commands to the x and y velocities (a bit naive).
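    A sketch of what such a state and (naive) linear process model could look like for a planar robot with two landmarks; the specific matrices and values are my illustration, not copied from the slide.

```python
import numpy as np

# State = (robot x, robot y, landmark1 x, landmark1 y, landmark2 x, landmark2 y);
# inputs = commanded x/y velocities; landmarks are stationary.
n_landmarks = 2
dt = 0.1
n = 2 + 2 * n_landmarks                  # state dimension

F = np.eye(n)                            # robot integrates velocity; landmarks fixed
G = np.zeros((n, 2))
G[0, 0] = dt                             # x position driven by commanded x velocity
G[1, 1] = dt                             # y position driven by commanded y velocity

Q = np.zeros((n, n))
Q[:2, :2] = 1e-3 * np.eye(2)             # process noise only on the robot states
```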


    Kalman Filter for SLAM


    Range Bearing

    Inputs are forward and rotational velocities
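    A sketch of a standard range-bearing measurement function for a robot pose (x, y, θ) observing a landmark at (lx, ly); this is an assumed form, since the slide's own equations are not reproduced here.

```python
import numpy as np

def range_bearing(pose, landmark):
    """Predicted range and bearing from a pose (x, y, theta) to a landmark."""
    x, y, theta = pose
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    r = np.hypot(dx, dy)                                     # range
    b = np.arctan2(dy, dx) - theta                           # bearing w.r.t. heading
    return np.array([r, np.arctan2(np.sin(b), np.cos(b))])   # wrap bearing to (-pi, pi]
```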


    Greg's Notes: Some Examples

    • Point moving on the line according to f = m a
     – state is position and velocity

     – input is force

     – sensing should be position

    • Point in the plane under Newtonian laws

    • Nonholonomic kinematic system (no dynamics)
     – state is workspace configuration
     – input is velocity command
     – sensing could be direction and/or distance to beacons

    • Note that all of these dynamical systems are "open-loop" integration

    • Role of sensing is to “close the loop” and pin down state

    Kalman Filters


    The one-dimensional case, with Bel(x_t) = N(μ_t, σ²_t):

    Prediction (action update):

      Bel(x⁻_{t+1}):  μ⁻_{t+1} = μ_t + B u_t
                      σ²⁻_{t+1} = A² σ²_t + σ²_act

      Bel(x⁻_{t+2}):  μ⁻_{t+2} = μ_{t+1} + B u_{t+1}
                      σ²⁻_{t+2} = A² σ²_{t+1} + σ²_act

    Correction (measurement update):

      Bel(x_{t+1}):  μ_{t+1} = μ⁻_{t+1} + K_{t+1} ( μ_{z,t+1} − μ⁻_{t+1} )
                     σ²_{t+1} = ( 1 − K_{t+1} ) σ²⁻_{t+1}


    Kalman Filter Algorithm

    1. Algorithm Kalman_filter( ⟨μ, Σ⟩, d ):
    2. If d is a perceptual data item z then
    3.   K = Σ C^T ( C Σ C^T + Σ_obs )^-1
    4.   μ = μ + K ( z − C μ )
    5.   Σ = ( I − K C ) Σ
    6. Else if d is an action data item u then
    7.   μ = A μ + B u
    8.   Σ = A Σ A^T + Σ_act
    9. Return ⟨μ, Σ⟩
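    A minimal Python/numpy rendering of this algorithm; the function signature and the "kind" flag are my interface choices, not part of the original pseudocode.

```python
import numpy as np

def kalman_filter(mu, Sigma, d, kind, A, B, C, Sigma_act, Sigma_obs):
    """Sketch of the algorithm above: kind is 'z' for a measurement,
    'u' for an action; d is the corresponding vector."""
    if kind == 'z':                                   # perceptual data item
        S = C @ Sigma @ C.T + Sigma_obs
        K = Sigma @ C.T @ np.linalg.inv(S)
        mu = mu + K @ (d - C @ mu)
        Sigma = (np.eye(len(mu)) - K @ C) @ Sigma
    elif kind == 'u':                                 # action data item
        mu = A @ mu + B @ d
        Sigma = A @ Sigma @ A.T + Sigma_act
    return mu, Sigma
```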


    Limitations

    • Very strong assumptions:
     – Linear state dynamics
     – Observations linear in state

      x_{t+1} = A x_t + B u_t + Σ_act
      z_t = C x_t + Σ_obs

    • What can we do if the system is not linear?
     – Non-linear state dynamics
     – Non-linear observations

      x_{t+1} = f( x_t, u_t, Σ_act )
      z_t = c( x_t, Σ_obs )

    • Linearize it!
    • Determine the Jacobians of the dynamics f and the observation function c w.r.t. the current state x and the noise:

      A_ij = ∂f_i/∂x_j ( x_t, u_t, 0 )      W_ij = ∂f_i/∂Σ_act,j ( x_t, u_t, 0 )
      C_ij = ∂c_i/∂x_j ( x_t, 0 )           V_ij = ∂c_i/∂Σ_obs,j ( x_t, 0 )


    Extended Kalman Filter Algorithm

    1. Algorithm Extended_Kalman_filter( ⟨μ, Σ⟩, d ):
    2. If d is a perceptual data item z then
    3.   K = Σ C^T ( C Σ C^T + V Σ_obs V^T )^-1
    4.   μ = μ + K ( z − c(μ, 0) )
    5.   Σ = ( I − K C ) Σ
    6. Else if d is an action data item u then
    7.   μ = f( μ, u, 0 )
    8.   Σ = A Σ A^T + W Σ_act W^T
    9. Return ⟨μ, Σ⟩

    (For comparison, the linear Kalman filter uses K = Σ C^T ( C Σ C^T + Σ_obs )^-1, μ = μ + K( z − C μ ), μ = A μ + B u, and Σ = A Σ A^T + Σ_act.)
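    A minimal Python/numpy rendering of the EKF algorithm above; the jacobians callback and the rest of the interface are my choices, and f, c are the user's non-linear models.

```python
import numpy as np

def extended_kalman_filter(mu, Sigma, d, kind, f, c, jacobians, Sigma_act, Sigma_obs):
    """Sketch of the EKF above: jacobians(mu, u) returns (A, W, C, V)
    evaluated at the current estimate (an assumed interface)."""
    if kind == 'z':                                   # measurement update
        A, W, C, V = jacobians(mu, None)
        S = C @ Sigma @ C.T + V @ Sigma_obs @ V.T
        K = Sigma @ C.T @ np.linalg.inv(S)
        mu = mu + K @ (d - c(mu, 0))
        Sigma = (np.eye(len(mu)) - K @ C) @ Sigma
    elif kind == 'u':                                 # action / prediction
        A, W, C, V = jacobians(mu, d)
        mu = f(mu, d, 0)
        Sigma = A @ Sigma @ A.T + W @ Sigma_act @ W.T
    return mu, Sigma
```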

    Kalman Filter-based Systems (2)


    • [Arras et al. 98]:

    • Laser range-finder and vision

    • High precision (


    • Instead of linearizing, pass several points from the Gaussian through the non-linear transformation and re-compute a new Gaussian.

    • Better performance (theory and practice).

     

    (Figure: the linearized prediction μ' = f(μ, u, 0), Σ' = A Σ A^T + W Σ_act W^T, compared with passing points through the non-linear f directly.)
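    A rough sketch of the idea: pick a symmetric set of points representing (μ, Σ), push them through the non-linear function, and refit a Gaussian. The simple point set and equal weights below are a simplification; the unscented transform in the literature uses a specific weighted set.

```python
import numpy as np

def propagate_gaussian(mu, Sigma, g):
    """Pass points representing N(mu, Sigma) through g and refit a Gaussian."""
    n = len(mu)
    L = np.linalg.cholesky(Sigma)                     # Sigma = L @ L.T
    pts = [mu + np.sqrt(n) * L[:, i] for i in range(n)] + \
          [mu - np.sqrt(n) * L[:, i] for i in range(n)]
    ys = np.array([g(p) for p in pts])                # push points through g
    mu_new = ys.mean(axis=0)                          # refit: new mean
    diff = ys - mu_new
    Sigma_new = diff.T @ diff / len(pts)              # refit: new covariance
    return mu_new, Sigma_new
```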

    Kalman Filters and SLAM


    • Localization: state is the location of the robot

    • Mapping: state is the location of beacons

    • SLAM: state combines both

    • Consider a simple fully-observable holonomic robot

     – x(k+1) = x(k) + u(k) dt + v

     – yi(k) = pi - x(k) + w

    • If the state is (x(k), p1, p2, ...) then we can write a linear observation system – note that if we don't have some fixed beacons, our system is unobservable

    (we can’t fully determine all unknown quantities)
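    The 1-D beacon system above can be written out directly; the sketch below (dt and the number of beacons are illustrative choices) also checks the observability claim.

```python
import numpy as np

# State s = [x, p1, p2]:  x(k+1) = x(k) + u(k)*dt + v,  y_i(k) = p_i - x(k) + w.
dt = 0.1
F = np.eye(3)                        # robot integrates u; beacons are stationary
G = np.array([[dt], [0.0], [0.0]])   # input only moves the robot
H = np.array([[-1.0, 1.0, 0.0],      # y_1 = p1 - x
              [-1.0, 0.0, 1.0]])     # y_2 = p2 - x

# Observability check: with only relative measurements the absolute positions
# are not determined (rank < 3), which is the point made above.
O = np.vstack([H @ np.linalg.matrix_power(F, i) for i in range(3)])
print(np.linalg.matrix_rank(O))      # 2, not 3: unobservable
```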