Dexter 6DOF-IMU
6DOF-IMU you say??? What's that?
Well, it looks like the folks over at Dexter Industries are putting the finishing touches on a new sensor. IMU stands for Inertial Measurement Unit, and 6DOF means 6 Degrees Of Freedom. What does that mean in normal speak?
Well, in layman's terms, the sensor will be able to measure not only movement in the left/right, forward/backward, and up/down directions, but also the amount of rotation about those same three axes (can anyone come up with a better explanation?)
Xander over at 'I'd Rather be Building Robots' has a prototype (complete with a ridiculously bright LED) and is putting it through its paces. Can't wait to see what's possible with this sensor!
Comments
...does it have good enough stability?
...does it have good enough resolution?
...does it do the integration on-sensor, or do you have to have the NXT do it (the problem here is not just resources on the NXT, but how rapidly it can "ping" the sensor for new accelerations)?
I'm *very* interested in this sensor, but it would be really interesting to see a detailed review of it.
I don't think there's angle integration with drift as low as in the Cruizcore.
If the sensor is moving at constant velocity and at the Earth's surface, then the accelerations could be used to deduce the direction of gravity (the only other acceleration in this case). If the sensor is changing velocity and near the Earth's surface, then the reported accelerations are a combination of gravity and any other accelerations of the robot.
To make matters more complicated, if the sensor reports an acceleration of, say, 3 m/s^2 in the +X direction, that doesn't tell you how your velocity over the surface has changed... because the +X axis of the sensor will be tied to the robot, while the robot may be at some unknown orientation on the surface. So cumulative errors in the orientation angle (due to slight errors in the gyro readings, and errors in integration) end up causing errors in determining the linear accelerations as well, amplifying the error in the velocities, an error which is amplified still more when integrated up to positions.
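A minimal sketch of that coupling (illustrative Python, not anything NXT-specific; the angles, readings, and function names are made-up): the world-frame acceleration is recovered by rotating the sensor reading with the estimated orientation and then subtracting gravity, so even a small heading error smears acceleration into the wrong axis.

```python
import numpy as np

G = 9.81  # gravity magnitude in m/s^2 (illustrative)

def world_accel(a_sensor, yaw_est):
    """Rotate a body-frame reading into the world frame using the estimated
    yaw, then remove gravity along the world Z axis.  Only yaw is handled
    here to keep the sketch short."""
    c, s = np.cos(yaw_est), np.sin(yaw_est)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ a_sensor - np.array([0.0, 0.0, G])

# The robot really accelerates 3 m/s^2 along world +X while sitting level,
# so the sensor reports 3 m/s^2 on X plus 1 g on Z.
a_sensor = np.array([3.0, 0.0, G])

print(world_accel(a_sensor, yaw_est=0.0))              # correct: ~[3, 0, 0]
print(world_accel(a_sensor, yaw_est=np.radians(5.0)))  # 5 deg heading error smears X into Y
```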
Yes, inertial measurement is *complicated*. Accuracy depends on a host of things, and the more frequently the linear & angular accelerations are sampled, the better (but you need to use a reasonable integration strategy as well).
This is a really cool sensor - a 6 DOF IMU is something I've wanted to play with on the NXT for some time. But I'd be very surprised if this suddenly allowed easy spatial localization for small LEGO robots (although I'd be VERY happy to be disproved!).
The thing you measure (at least for 3 of the 6 DoFs) is acceleration. As we're most likely using this sensor on Earth, it will "feel" the effects of gravity. What the sensor outputs is its current acceleration, and unfortunately (or luckily, depending), it has the gravitational acceleration vector superimposed on its own acceleration in the Earth reference frame.
So sometimes you can use this information, e.g. to make a tilt sensor, or to detect free fall (the gravitational acceleration seems "gone"). But when you get those 2 things combined (your object doing some sort of motion on its own + gravity), it can be hard to separate those 2 forces again.
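As a rough sketch of those two uses (illustrative Python, not NXT code; the 0.2 g free-fall threshold is just an assumption): tilt comes from the direction of the measured gravity vector, and free fall shows up as the vector's magnitude collapsing toward zero.

```python
import math

def tilt_deg(ax, ay, az):
    """Tilt of the sensor's Z axis away from vertical, valid only while the
    sensor is not accelerating on its own (gravity is the whole reading)."""
    return math.degrees(math.atan2(math.hypot(ax, ay), az))

def in_free_fall(ax, ay, az, g=9.81, threshold=0.2):
    """In free fall the gravitational acceleration seems 'gone': the measured
    magnitude drops well below 1 g.  The 0.2 g threshold is illustrative."""
    return math.sqrt(ax * ax + ay * ay + az * az) < threshold * g

print(tilt_deg(0.0, 0.0, 9.81))     # 0.0  -> flat on the desk
print(tilt_deg(9.81, 0.0, 0.0))     # 90.0 -> lying on its side
print(in_free_fall(0.1, 0.0, 0.3))  # True -> reading is nearly zero
```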
I've tried to integrate the values of the HiTechnic accelerometer before. I put it "level" on my desk and calibrated it, i.e. so that its current orientation counts as "even/level". Then I just moved it to the right a bit. The resulting plot can be seen here. It shows speed, and you can nicely identify the problem: the speed doesn't return to 0 after integration -- the samples don't cancel each other out perfectly. Now imagine how bad this gets when performing a 2nd integration.
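A toy reproduction of that effect (illustrative Python with synthetic data standing in for the logged samples; the bias value is made up): integrate a short "move right and stop" trace sampled every 20 ms with a tiny calibration error in it, and the integrated velocity never quite returns to zero.

```python
# Synthetic stand-in for the logged HiTechnic samples: accelerate right,
# brake to a stop, then sit still -- sampled every ~20 ms.
DT = 0.02
accel = [0.0] * 10 + [0.5] * 25 + [-0.5] * 25 + [0.0] * 40   # m/s^2
BIAS = 0.02          # tiny calibration / tilt error present in every sample

v = 0.0
for a in accel:
    v += (a + BIAS) * DT        # simple rectangle-rule integration
print(f"final velocity: {v:.3f} m/s (should be 0; the bias never cancels out)")
```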
Regards, Linus
The problem is at least partially how you are doing it. You can sample the HiTechnic sensor at best about once every 20 ms... meaning you have no idea what's happening in between, and all the measurement times are at best +/- 20 ms or so. Then there's the integration technique used... straight Newtonian? Leapfrog? Runge-Kutta? A lot of simple techniques just point out how the problem shouldn't be handled... not that it's intractable.
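For illustration, here is roughly how the crudest rectangle-rule update compares with a trapezoid update on the same 20 ms samples (plain Python, made-up data; leapfrog and Runge-Kutta variants follow the same idea with more bookkeeping):

```python
def integrate_rectangle(samples, dt):
    """v += a*dt, using only the newest sample each step."""
    v = 0.0
    for a in samples:
        v += a * dt
    return v

def integrate_trapezoid(samples, dt):
    """v += (a_prev + a_now)/2 * dt, averaging adjacent samples."""
    v, prev = 0.0, samples[0]
    for a in samples[1:]:
        v += 0.5 * (prev + a) * dt
        prev = a
    return v

# A linearly ramping acceleration, a(t) = 5*t, sampled every 20 ms for 1 s.
dt = 0.02
samples = [0.1 * i for i in range(51)]
print(integrate_rectangle(samples, dt))   # ~2.55 -- overshoots the exact 2.5
print(integrate_trapezoid(samples, dt))   # ~2.50 -- exact for a linear ramp
```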
This is why I'm curious what the *sensor* does, before ever sending anything to the NXT. If the NXT has to constantly sample the sensor, and then laboriously spend cycles crunching the numbers (especially in a language as unsuited to it as NXT-G)... not only will it end up inaccurate, but too busy to do anything else.
If instead the *sensor* has a small processor dedicated to this task, things become much more reasonable. The NXT doesn't have to constantly keep track of everything (or do the integrations) if the sensor is 'smart' enough to handle it all behind the scenes...
I didn't do the integration in real time, but logged the values for later offline processing. One thing that can be handy is my NXC suite for "datalogging to memory", so that there's no latency. After the program finishes, a CSV file is created and written to flash. If anybody is interested: LogToMemory.nxc.
I used everything from MATLAB's bag of tricks for integration, and also tried some simple Runge-Kutta and trapezoid methods. Interestingly, the results didn't differ that much. I'd really love to show you the graphs (different colors for different integration methods), but I can't find them, or maybe I didn't save them :-(.
My explanation back at the time was, as you said: One has no idea what's happening between two measurements! Also I guess that my table wasn't perfectly horizontal, so during my movement, I sooner or later started integrating some gravity...
The thing is, this is commonly known stuff - it's not like functional IMUs are new technology; they've been used for some time in planes, rockets, etc. Yes, this was all long before GPS... because GPS was difficult, and IMUs were well understood. Now they've been miniaturized. And gotten significantly less expensive. And lower power. But the NXT, with slow calculations and even much, much slower I2C speeds, really can't take advantage of them properly.
As to the table issue - that actually can be addressed. Now and then, you calibrate your IMU. For a simple NXT robot: stop the wheels, wait, say, 2 seconds, then take 1 second's worth of data. Due to the physics of the situation, you *know* the robot is stationary during that time - so the only acceleration present is gravity. That will have a vector direction, which you can then subtract out of your data under "normal" conditions. You probably have to do this repeatedly, because different spots on the table (or floor) will have slightly different angles... and if the surface is rough, you may have to use more sophisticated leveling tricks (like doing this half a dozen times with very small movements in between to judge the surface slant instead of just the very small-scale surface roughness). All these things can be done... but it's not as simple as sticking an IMU on, integrating, and thinking that's going to work. That last mindset is what I'm most concerned with on sensors like these... as I still spend a lot of time answering really simple misunderstandings about the acceleration sensor.
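A sketch of that recalibration idea in plain Python (illustrative only; read_accel is a hypothetical callable returning an (ax, ay, az) reading, not part of any real NXT API): while the robot is known to be stopped, average a second's worth of readings to get the current gravity vector, then subtract that vector from later readings before integrating.

```python
import numpy as np

def estimate_gravity(read_accel, n_samples=50):
    """Average n readings taken while the robot is known to be stationary;
    the mean is the local gravity vector in the sensor frame.
    `read_accel` is a hypothetical callable returning (ax, ay, az)."""
    return np.mean([read_accel() for _ in range(n_samples)], axis=0)

def linear_accel(read_accel, gravity):
    """Subtract the most recent gravity estimate from a live reading.
    Only valid until the robot moves onto a differently sloped patch of
    floor -- hence the repeated recalibration described above."""
    return np.asarray(read_accel()) - gravity

# Usage sketch: stop the wheels, wait ~2 s for vibration to die down, then
#   g_vec = estimate_gravity(read_accel)
#   ... drive ...
#   a_lin = linear_accel(read_accel, g_vec)
```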