Friday, August 06, 2010

Simple kalman filter example

There are a ton of Kalman filter overviews online. Most give a general sense of what the filter does, but then they hit a wall of variables and matrices and never work through a good, simple example.

The best guide I found is a PDF scan of a much-faxed copy of Roger M. du Plessis' 1967 classic "Poor Man's Explanation of Kalman Filtering".

His paper is great because he starts with a single-variable filter (no matrices!) with no control component and no prediction algorithm. I'm going to try to give an even simpler example with softer terminology and more hand-waving.

Kalman filters are a way to take a bunch of noisy measurements of something, and perhaps also some predictions of how that something is changing, and maybe even some forces we're applying to that something, and to efficiently compute an accurate estimate of that something's true value.

Let's say we want to measure the temperature in a room. We think it's about 72 degrees, plus or minus 2 degrees. And we have a thermometer that gives uniformly random results within a range of +/-5 degrees of the true temperature.

We take a measurement with the thermometer and it reads 75. So what's our best estimate of the true temperature? Kalman filters use a weighted average to pick a point somewhere between our 72 degree guess and the 75 degree measurement. If the weight is large (approaching 1.0), we mostly trust our thermometer. If the weight is small, we mostly trust our guess and ignore the thermometer.

Here's how we choose the optimal weight, given the accuracy of our guess and the accuracy of the thermometer. (To keep the arithmetic simple, I'm plugging the +/-2 and +/-5 spreads straight into the equations as "variances"; strictly speaking a variance is the square of a spread like that, but the mechanics of the filter are the same either way.)

weight = temperature_variance / (temperature_variance + thermometer_variance)
0.29 = 2 / (2+5)

If temperature_variance were very large compared to thermometer_variance, weight would approach 1.0. That is, we'd ignore our guess and just use the measured value. Likewise, if thermometer_variance dominated, the weight would approach 0, and we'd put very little trust in our thermometer readings.

29% weight means we'll trust our guess more than the thermometer, which makes sense, because we think our guess is good to +/-2 degrees, whereas the thermometer was only good to +/-5.
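In code, that weight calculation is a one-liner. Here's a minimal Python sketch (the function and variable names are mine, not standard Kalman notation):

```python
def kalman_weight(guess_variance, measurement_variance):
    # How much to trust the new measurement, from 0.0 (ignore it
    # completely) to 1.0 (ignore the old guess and use it alone).
    return guess_variance / (guess_variance + measurement_variance)

print(round(kalman_weight(2, 5), 2))  # 0.29
```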

Now we do the weighted average:

estimate = guess + weight*(measurement - guess)
72.87 = 72 + 0.29*(75-72)

or equivalently

estimate = (1-weight)*guess + weight*measurement
72.87 = 0.71*72 + 0.29*75

That is, we went 29% of the way from 72 to 75, or equivalently, we took 71% of the guess plus 29% of the measurement.
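As a Python sketch (again, the names are my own):

```python
def blend(guess, measurement, weight):
    # Move `weight` of the way from the old guess toward the measurement.
    return guess + weight * (measurement - guess)

print(round(blend(72, 75, 0.29), 2))  # 72.87
```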

There's one last thing to compute: how confident we are in our estimate of 72.87 degrees. That equation is:

                      temperature_variance*thermometer_variance 
estimate_variance =   -----------------------------------------
                    (temperature_variance + thermometer_variance)

1.43 = 2*5 / (2+5)

So we think our estimate is correct to +/-1.43 degrees.

Now we have a guess that the temperature in the room is 72.87 degrees, +/-1.43 degrees. And we still have a thermometer that reads the true temperature to within +/-5 degrees.

That's basically the situation where we started, so we can run the whole algorithm again:

First we compute the weight, using our new, more accurate guess confidence:

weight = temperature_variance / (temperature_variance + thermometer_variance)
0.22 = 1.43 / (1.43+5)


We take a measurement, and this time let's say it comes up as 71 degrees.

Now we can compute the weighted average of our old guess and our new measurement:

estimate = guess + weight*(measurement - guess)
72.46 = 72.87 + 0.22*(71-72.87)

And the new confidence level:

                      temperature_variance*thermometer_variance 
estimate_variance =   -----------------------------------------
                    (temperature_variance + thermometer_variance)

1.11 = 1.43*5 / (1.43+5)

So after the second measurement, we estimate that the actual temperature is 72.46 degrees, +/-1.11 degrees.

Kalman filters are nice because we don't have to remember the whole history of measurements and estimates. We just keep track of our most recent estimate and our confidence level in that estimate.
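Putting the three formulas together, the whole example fits in a short loop. Here's a Python sketch (my own names; note that carrying full precision instead of the rounded intermediate values above gives 72.86 and 72.44 rather than 72.87 and 72.46):

```python
def kalman_step(estimate, variance, measurement, measurement_variance):
    # One filter update: blend in a measurement, shrink the variance.
    weight = variance / (variance + measurement_variance)
    estimate = estimate + weight * (measurement - estimate)
    variance = (variance * measurement_variance) / (variance + measurement_variance)
    return estimate, variance

estimate, variance = 72.0, 2.0   # initial guess: 72 degrees, variance 2
for reading in [75, 71]:         # thermometer readings, variance 5
    estimate, variance = kalman_step(estimate, variance, reading, 5.0)
    print(round(estimate, 2), round(variance, 2))
# 72.86 1.43
# 72.44 1.11
```

The loop only ever touches `estimate` and `variance`, which is the whole point: no history of past measurements is kept.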

If we decide to turn on the air conditioner, so that the temperature in the room starts decreasing, we can expand our calculations to include that "control" variable, and use it to update our estimates by "predicting" how much colder it'll be at each measurement. Once I figure out how to do that, maybe I'll post a more complicated example :)

13 comments:

Robert said...

I think something is wrong.

I have an example set of values that step from 50 to 55. By the time the temperature "steps" up to 55, the temperature variance is so small that the estimate ignores the actual reading. Even after several iterations at 55, the filter is giving almost all the weight (98%) to the old estimate.

Is this how the Kalman filter is supposed to work?

I was expecting it to filter the small variations to the "correct" value, but any large jumps would cause the estimate to jump quickly to the new state.

This is one of the few examples that I found that actually work out an example. Thank you!

Robert said...

Nothing is wrong.

The reason my example does not work is that the temperature change is caused by some outside stimulus, which is not taken into account by the equations. This is what you said in your last paragraph about the AC turning on.

I did find some simple equations that add this feature.
http://www.cs.unc.edu/~tracker/media/pdf/SIGGRAPH2001_CoursePack_08.pdf

Take a look at page 32 in the PDF (page 30 at the bottom of the page). Q is the process noise covariance. I think this means "how noisy is the variance because of outside influences".

Setting Q to 0.01 allows the Kalman filter to track step functions nicely.
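In a Python sketch of the scalar filter from the post, that tweak is just adding Q to the variance before each update (my own variable names, not notation from the course pack):

```python
def kalman_step(estimate, variance, measurement, measurement_variance, q=0.01):
    # Process noise Q: admit the true value may have drifted since the
    # last update, so our confidence decays and never locks up at zero.
    variance = variance + q
    weight = variance / (variance + measurement_variance)
    estimate = estimate + weight * (measurement - estimate)
    variance = (variance * measurement_variance) / (variance + measurement_variance)
    return estimate, variance

# With Q > 0 the estimate keeps chasing a step from 50 to 55:
estimate, variance = 50.0, 2.0
for _ in range(200):
    estimate, variance = kalman_step(estimate, variance, 55.0, 5.0)
print(round(estimate, 1))  # 55.0
```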

Thank you again for getting me started and over the first 10 hurdles.

Lunkwill said...

Thanks for the update, Robert. I've always wanted to add more to this post to get beyond the trivial example. Want to write it?

amine said...

Really nice example. Thank you

lifeisinfinity said...

Very useful. We look forward to another example on Kalman filters.

Anonymous said...

Thanks for the useful article. I have a question: when you say "We think the temperature of the room is about 72 degrees, plus or minus 2 degrees", does that mean our certainty about the data is +/-2 degrees, or that the temperature of the room changes by +/-2 degrees?

Anonymous said...

72 +/- 2 degrees means we think the actual temperature of the room is between 70 and 74 degrees.

You mentioned "change in temperature", which makes me think you're asking about a room whose temperature is moving up or down. But the example I gave is for a room whose temperature never changes.

You have to use the more complex forms of Kalman filters to accurately measure systems that are changing.

Maxime Garcia said...

For Anonymous :

In this situation, you assume the room temperature is fixed and never changes.
And you read several values off the thermometer.

The point is to use this series of values to guess the temperature of the room, along with an estimate of the accuracy.

Take a basic way to guess the real temperature: the thermometer's confidence is -5°/+5°.
So if you take 1000 readings and average them, you find the actual temperature.

The Kalman filter in this case gives you a quicker way to figure it out, and also gives a variance for the result.
You start with an estimate (temperature & variance) and iterate.

For this example to be better, I would start with a human-made estimate, say 70° +/- 10°, and show the convergence through several iterations.

(A quick note: a thermometer that displays 71° for a real temperature of 72.874° would display that even for 1000 readings. In your mind, imagine adding another, similar thermometer and reading it; it gives you a different measurement.)

MJ said...

Very nice simple example, but if

weight = temperature_variance / (temperature_variance + thermometer_variance),

shouldn't this rather say

0.138 = (2)^2 / ((2)^2+(5)^2)

than

0.29 = 2/(2+5) ??

I think there is confusion between the variance and the standard deviation.

Roel Baardman said...

I'm wondering why Kalman uses the variance and not the standard deviation for the weighted average between the prediction and measurement?

Cephas Atheos said...

This is one of the most helpful and sensible and understandable introductions to Kalman filtering I've found in weeks. I thought I could do it by trying to understand all the covariances and noise functions and so on, but I was dead wrong.

I'm starting to relearn linear algebra and matrices from scratch so I can go on to the next step... Hopefully you'll be able to write an equivalent matrix (i.e. multivariable) intro soon.

Thanks for taking the time to explain so this dumbarse can get it!

Ashkan Rashedi said...

I love the simplicity. Reading this at midnight in my office at the university, after so much headache from more than 10 different super-complex publications and theses that never made Kalman filters click, this is like a headache pill. Thank you! I am using it for quadcopter sensor readings.
