Guest Blog - Behavior-Based Programming
Received the following from Clinton:
I was asked one day, "How do you make a robot do two things at once?" It sounds like something that should be simple, but it really isn't. Even if you can get two or three things to work together in your program, trying to add more typically breaks your code and is a nightmare to debug.
In 1986, roboticists at MIT were having the same problem, and they came up with a technique that is sometimes called behaviour-based robotics. It allows you to create a bunch of simple behaviours for your robot to exhibit, and to determine when a given behaviour has control of the robot. For example, a robot may have a basic behaviour of wandering around a room, and a higher-priority behaviour of backing up and turning when it bumps into something. The beauty of the system is that each individual behaviour is fairly simple and can be debugged on its own, and you can build up your program, in a modular fashion, from something simple to something complex and nuanced.
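To make the idea of priorities concrete, here is a minimal sketch in plain Java (not NXT-G, and not tied to any robot API; the behaviour names and the simulated bumper are illustrative assumptions only). Each behaviour reports whether it wants control, and a simple arbiter repeatedly hands the robot to the highest-priority behaviour that asks for it:

```java
// Minimal sketch of priority-based behaviour arbitration.
// The Behavior interface and the two example behaviours are illustrative
// assumptions, not code from the video or from NXT-G.

interface Behavior {
    boolean wantsControl();   // trigger: does this behaviour want the robot?
    void act();               // action: one short step of the behaviour
}

public class ArbiterDemo {
    public static void main(String[] args) throws InterruptedException {
        // Highest-priority behaviour first.
        Behavior[] behaviors = {
            new BackUpAndTurn(),  // runs when the (simulated) bumper is pressed
            new Wander()          // default behaviour: always wants control
        };

        for (int step = 0; step < 20; step++) {
            // Give control to the first (highest-priority) behaviour that asks.
            for (Behavior b : behaviors) {
                if (b.wantsControl()) {
                    b.act();
                    break;
                }
            }
            Thread.sleep(100);
        }
    }
}

class Wander implements Behavior {
    public boolean wantsControl() { return true; }             // default behaviour
    public void act() { System.out.println("wandering..."); }  // drive forward, etc.
}

class BackUpAndTurn implements Behavior {
    public boolean wantsControl() { return Math.random() < 0.2; } // simulated bumper hit
    public void act() { System.out.println("bump! backing up and turning"); }
}
```

On a real robot the act() step would drive the motors briefly and the arbiter loop would run forever; the point is only that each behaviour stays small enough to write and test on its own.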
I have created a 45-minute-long video tutorial that discusses what behaviour-based programming is, how to use these techniques in NXT-G, and how to test and debug the robot as the program iteratively becomes more complex.
http://robotclub.ab.ca/articles/12/behaviour-based-wall-follower-in-nxt-g
Comments
Yes, leJOS NXJ does indeed have a nice behaviour-based API. (You guys have done excellent work on it, by the way.) That was where I got the idea to simplify the control and have a behaviour not just drive one end effector -- the screen, the drive motors, or the speaker -- but take complete control of the robot.
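For anyone curious what that looks like in Java, here is a rough sketch using the leJOS NXJ subsumption API as I recall it; the exact class and method names, motor ports, and sensor wiring are assumptions and may differ between leJOS versions:

```java
// Rough sketch of whole-robot behaviours with the leJOS NXJ subsumption API.
// Ports and rotation amounts are assumptions for illustration only.
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.nxt.TouchSensor;
import lejos.robotics.subsumption.Arbitrator;
import lejos.robotics.subsumption.Behavior;

public class BumpAndWander {

    static class Wander implements Behavior {
        public boolean takeControl() { return true; }      // lowest-priority default
        public void action() { Motor.B.forward(); Motor.C.forward(); }
        public void suppress() { Motor.B.stop(); Motor.C.stop(); }
    }

    static class BackUp implements Behavior {
        private final TouchSensor bumper = new TouchSensor(SensorPort.S1);
        public boolean takeControl() { return bumper.isPressed(); }
        public void action() {
            // Back up, then turn away from the obstacle.
            Motor.B.rotate(-360, true);
            Motor.C.rotate(-360);
            Motor.B.rotate(180);
        }
        public void suppress() { Motor.B.stop(); Motor.C.stop(); }
    }

    public static void main(String[] args) {
        // In leJOS, the last behaviour in the array has the highest priority.
        Behavior[] behaviors = { new Wander(), new BackUp() };
        new Arbitrator(behaviors).go();
    }
}
```

Each behaviour owns the whole robot while its action() runs, which matches the idea above of a behaviour having complete control rather than driving a single end effector.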
[I'm afraid your link to the corresponding trail doesn't work. Did you mean this leJOS NXT behaviour programming tutorial page?]
Cheers,
Clinton
BTW, as an alternative to putting each Behavior Trigger in its own NXT-G sequence, you could put them all together in one sequence (since each sensor test is very fast), for two sequences total. That would make the program somewhat easier to edit and deal with, and I don't think it's really any less modular, given the way your Arbiter works. This way you could also skip the logic variables: put the sensor tests in priority order in their sequence and have them set the text variable directly (this puts the Arbiter functionality in with the triggers rather than the actions).
As a further option, you could do the whole program in one sequence by joining the two sequences above (action switch after the triggers). Then, if one particular sensor test or other action is performance-critical (such as the light sensor test when line following), you can split it off into its own sequence with an intermediate variable (like your Logic variables), which gives it roughly half of the CPU cycles to itself for the fastest response. Just a fun little FYI for anyone there.
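In text form, that two-sequence structure might look roughly like the hypothetical Java sketch below (not NXT-G): one thread stands in for the trigger sequence and writes the winning behaviour's name to a shared text variable, and the main loop stands in for the action switch. The sensor tests are simulated and the behaviour names are assumptions.

```java
// Hypothetical Java rendering of the "two sequences" idea:
// one trigger loop that arbitrates by writing a behaviour name to a
// shared variable, and one action loop that switches on that name.
import java.util.concurrent.atomic.AtomicReference;

public class TwoSequenceDemo {
    // Plays the role of the NXT-G text variable shared between the sequences.
    static final AtomicReference<String> currentBehavior = new AtomicReference<>("wander");

    public static void main(String[] args) {
        // "Trigger sequence": tests in priority order, writes the winner directly.
        Thread triggers = new Thread(() -> {
            while (true) {
                if (bumperPressed())       currentBehavior.set("backup");
                else if (lineDetected())   currentBehavior.set("follow");
                else                       currentBehavior.set("wander");
                sleep(20);
            }
        });
        triggers.setDaemon(true);
        triggers.start();

        // "Action sequence": a switch on the shared variable.
        for (int i = 0; i < 50; i++) {
            switch (currentBehavior.get()) {
                case "backup": System.out.println("backing up"); break;
                case "follow": System.out.println("following line"); break;
                default:       System.out.println("wandering");
            }
            sleep(100);
        }
    }

    // Simulated sensor tests; real code would read the touch and light sensors.
    static boolean bumperPressed() { return Math.random() < 0.1; }
    static boolean lineDetected()  { return Math.random() < 0.3; }
    static void sleep(long ms) { try { Thread.sleep(ms); } catch (InterruptedException e) { } }
}
```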
Glad to hear you liked the video.
I did consider using fewer sequence beams, and it is indeed a very good way to do it, especially as there would be less coding and the performance would be better.
The chief reason I separated everything is to make it clear that each bit of code is a logically separate unit that can be manipulated independently. (Minor reasons are to allow for the possibility of a test that isn't nearly instantaneous, such as "Wait for the bumper to be pushed" followed by "set the bumper_triggered variable to true", and also to avoid endlessly scrolling horizontally.)
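For illustration, a trigger like that could be sketched as its own thread (standing in for its own sequence beam) that blocks until the event happens and then sets its flag; the names below are hypothetical and the bumper wait is simulated.

```java
// Hypothetical sketch of a blocking trigger living in its own sequence:
// it waits for the (simulated) bumper event and then sets its flag,
// while the rest of the program keeps running in other sequences/threads.
import java.util.concurrent.atomic.AtomicBoolean;

public class BlockingTriggerDemo {
    static final AtomicBoolean bumperTriggered = new AtomicBoolean(false);

    public static void main(String[] args) throws InterruptedException {
        Thread bumperTrigger = new Thread(() -> {
            waitForBumperPress();       // blocks until the bumper is pushed
            bumperTriggered.set(true);  // then raise the flag for the arbiter
        });
        bumperTrigger.setDaemon(true);
        bumperTrigger.start();

        // Stand-in for the rest of the program polling the flag.
        while (!bumperTriggered.get()) {
            System.out.println("no bump yet, carrying on");
            Thread.sleep(200);
        }
        System.out.println("bumper_triggered is now true");
    }

    // Simulated blocking wait; on a real NXT this would be a wait-for-touch block.
    static void waitForBumperPress() {
        try { Thread.sleep(1000); } catch (InterruptedException e) { }
    }
}
```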
P.S. I'm at the ATL airport waiting to get on the plane with wall-3 to return home from FIRST.