Particle filter with NXT
In short, the particle filter acknowledges that you won't have perfect odometry readings all the time, and so uses three ultrasonic sensors to 'correct' errors and stay on track.
"Localisation is the process of working out where you are, so localisation for a robot is having the robot work out where it is using the readings from its own sensors. Sounds easy, but it can be surprisingly challenging. The big problem stems from the fact that the robot's wheels have to slip to make the robot move (a fact of physics), so careful measurement of the motors' rotation sensors is not going to help you. Fitting streams of range readings from ultrasound or infrared sensors has problems with noise and error, too. So, over the last decade or so, roboticists have pretty well agreed that the best way to localise is to combine all of the robot's sensor readings over time using probability theory. Sounds challenging, but it can be surprisingly easy.
I've been using a probabilistic filter called a particle filter to localise a robot built around the NXT. The other name for this style of probabilistic localisation is Monte-Carlo Localisation, so called because it relies on finding the best odds from a series of seemingly random outcomes. All of the code is written in Robot-C and runs on-board the NXT in real time. This surprised me. Particle filters are supposed to be horribly computationally intensive with lots of floating point math, so surely you can't expect it to run on a LEGO brick ... but run it did."
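To make the move/weigh/resample cycle he describes concrete, here is a minimal one-dimensional sketch of a particle filter. This is illustrative Python, not the author's on-board RobotC; the corridor length, noise levels, and sensor model are all assumptions chosen for the example.

```python
import random
from math import exp, pi, sqrt

random.seed(42)  # fixed seed so the illustrative run is repeatable

N = 500        # number of particles (pose hypotheses)
WORLD = 100.0  # corridor length in cm (assumed for the example)

def gaussian(x, mu, sigma):
    """Probability density of a normal distribution at x."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def particle_filter_step(particles, odo_move, range_reading):
    # 1. Motion update: shift every particle by the odometry estimate,
    #    plus noise that stands in for wheel slip.
    particles = [p + odo_move + random.gauss(0, 1.0) for p in particles]
    # 2. Measurement update: weight each particle by how well it
    #    predicts the ultrasonic range reading (wall assumed at x = 0).
    weights = [gaussian(range_reading, p, 3.0) for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. Resampling: draw a fresh particle set in proportion to the weights,
    #    so likely hypotheses multiply and unlikely ones die out.
    return random.choices(particles, weights=weights, k=len(particles))

# Start with particles spread uniformly: the robot has no idea where it is.
particles = [random.uniform(0, WORLD) for _ in range(N)]
true_x = 20.0
for _ in range(10):
    true_x += 5.0                            # the robot actually moves 5 cm
    reading = true_x + random.gauss(0, 3.0)  # noisy range from the wall
    particles = particle_filter_step(particles, 5.0, reading)

# The position estimate is just the mean of the surviving particles.
estimate = sum(particles) / len(particles)
```

Note that the whole loop is additions, multiplications, and a resampling draw per particle, which is why a modest particle count can plausibly run in real time even on an NXT brick.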
He gives a great explanation of particle filters along with video of it in operation.
Source code in RobotC is provided for those who want to look under the hood.
I also read about the work he is doing with some of the Mindsensors hardware, including the newly released camera sensor.
The wheels may have to slip, but how much slip actually occurs over the distance traveled? With proper programming and construction techniques, this slippage can be minimized and dealt with. My FLL team's robot is accurate to better than 1 mm linearly and 0.5 degrees rotationally, tested both physically and programmatically.
As to an FLL robot being "accurate to better than 1 mm linearly and 0.5 degrees rotationally"... well, I'd be curious to know exactly what you mean by those figures. For instance, "1 mm linearly" has very little meaning to me. Does it mean the robot can navigate to the same location 10 meters from the start point and end up with a pointer inside a circle 2 mm in diameter in 98 tries out of 100? Or does it mean that in one test the robot stopped within 1 mm of a line located 10 cm from the start? Likewise for angular position: how did you measure that small an angle? (I survey caves, and the equipment used only reads to the half degree, and its actual accuracy is much worse, usually around +/- 2 degrees.) Was that the error in the way the robot moved, or in its starting position? Etc.
Where something like this comes in handy is when you are not sure what conditions you will encounter. By taking in information from a variety of sensors, you can build a 'better' picture of exactly what your robot is doing.
Particle filters are just one way of combining several different types of data into meaningful information.
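The fusion step itself is simple: if the sensors' noise is assumed independent, a hypothesis is weighted by the product of the per-sensor likelihoods. A small illustrative sketch (the three-sensor setup and all the numbers are hypothetical, not taken from the article):

```python
from math import exp, pi, sqrt

def gaussian(x, mu, sigma):
    """Probability density of a normal distribution at x."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def hypothesis_weight(predicted, measured, sigma=3.0):
    """Weight one pose hypothesis by how well it explains every sensor.

    predicted: ranges the three ultrasonic sensors *would* report
               if the hypothesis were the true pose
    measured:  ranges the sensors actually reported
    Assuming independent sensor noise, the joint likelihood is the
    product of the per-sensor likelihoods.
    """
    w = 1.0
    for pred, meas in zip(predicted, measured):
        w *= gaussian(meas, pred, sigma)
    return w

# A hypothesis that matches the readings well outweighs one that does not,
# so it survives resampling while the poor match dies out.
good = hypothesis_weight([30.0, 45.0, 60.0], [31.0, 44.0, 61.0])
bad  = hypothesis_weight([30.0, 45.0, 60.0], [50.0, 20.0, 90.0])
```

The same multiply-the-likelihoods pattern extends to mixing sensor types, e.g. an ultrasonic range and a compass heading, as long as each sensor has its own noise model.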