FLL design theory...
This post is aimed more at coaches and FLL participants. I'm curious about your plan of attack for the challenges in this year's FLL events. Most of the NXT bots I've seen to this point do NOT use sensors (other than the built-in rotation sensors in the motors). My question is this: are you choosing not to use sensors for any particular reason (difficult to program, unsure how to use them properly, etc.), or have you just not found a use for them in the challenges?
Are any of the teams out there using sensors? Without giving away your secrets, are you finding them useful for successfully completing one or more challenges?
Many of us in the NXT community are simply curious about how FLL participants are doing when it comes to programming... we'd enjoy hearing your comments.
Comments
This was very easy for them to do because they had already gone through the Robot Educator tutorial, which includes missions to "stop on a black line" as well as line following. I simply told them to refer to the Robot Educator tutorial for examples. Most of them were able to create programs that worked on the table. One child even created a line-following routine that drove right to the space elevator from home base.
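For readers who haven't seen those tutorial behaviors, here is a minimal sketch of both in NXC, a C-like text language for the NXT. The kids above were using the graphical NXT-G software, so this is an illustration rather than their actual program; the port and the threshold of 40 are assumptions:

#define LIGHT  IN_3   // assumed light sensor port
#define DARK   40     // assumed reading that counts as "black"

task main()
{
    SetSensorLight(LIGHT);

    // "Stop on a black line": drive until the sensor sees dark.
    OnFwd(OUT_BC, 50);
    until (Sensor(LIGHT) < DARK);
    Off(OUT_BC);

    // Simple edge following: curve one way on dark, the other on light.
    while (true) {
        if (Sensor(LIGHT) < DARK) {
            OnFwd(OUT_B, 50);
            OnFwd(OUT_C, 20);
        } else {
            OnFwd(OUT_B, 20);
            OnFwd(OUT_C, 50);
        }
    }
}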
Unfortunately, with the exception of one team using a single "stop on black line" for the dirt trap, I don't see any use of the light sensor with their current FLL mission solutions. This may in part be due to the accuracy that can be achieved by simple dead reckoning techniques.
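For contrast, dead reckoning is just a fixed script of motor moves, counted in degrees on the built-in rotation sensors. A hypothetical NXC sketch, where every number is an invented placeholder rather than anyone's actual route:

task main()
{
    RotateMotor(OUT_BC, 60, 1080);   // forward three wheel rotations
    RotateMotor(OUT_B, 50, 180);     // pivot on one wheel to turn
    RotateMotor(OUT_BC, 60, 720);    // forward to the mission model
    RotateMotor(OUT_BC, -60, 1980);  // back straight out to home base
}

It works as well as the table is flat and the wheels don't slip, which on a good FLL table can be surprisingly well.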
As a simple example, cooperative tasks between the two playing fields can easily be arranged where one task has to be done if you are the "first" to the center, while a different action or task has to occur if you are "second" to the center... and the number of points you get depends on how rapidly the cooperative task is completed. Now, each team has an incentive to cope with whatever the situation is at the center, as fast as possible (i.e., not waiting for a later run and switching programs to cope with what the other team has done).
Another example: picture a field with one element in one of two positions, selected *after* the robot has left home base. The robot has to dynamically adapt to the conditions on the playfield under autonomous control... putting the intelligence back on the robot, and moving it off the team.
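A branch like that might look like the following NXC sketch, where the robot senses which position the element is in and picks a routine accordingly (the ultrasonic port, the timings, and the 30 cm cutoff are illustrative assumptions):

task main()
{
    SetSensorLowspeed(IN_4);           // assumed ultrasonic sensor port
    OnFwd(OUT_BC, 50);
    Wait(2000);                        // drive to the decision point
    Off(OUT_BC);

    if (SensorUS(IN_4) < 30) {
        // Element detected in the near position
        RotateMotor(OUT_BC, 50, 360);
    } else {
        // Element must be in the far position
        RotateMotor(OUT_B, 50, 180);   // turn toward it first
        RotateMotor(OUT_BC, 50, 720);
    }
}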
--
Brian Davis
My problem with the Light sensor and the "dark" areas on the table is that the dark area doesn't give any indication of relative position to a challenge - I mean that when the LS detects the black line, for example, you have no way of knowing whether you're hitting the line perpendicularly or at an angle... and this makes it difficult to use the LS to determine accurate locations on the table. Maybe I'm wrong...
The Sound sensor is completely useless in this type of competition where the sound level is always LOUD.
Jim
Our FLL team is using the following sensors on our NXT robot for this year's Nano challenge: 3 rotation sensors, 1 touch sensor, 1 light sensor, and 1 ultrasonic sensor. The ultrasonic sensor is being used to follow along one of the walls of the table at a programmed distance. The light sensor is used in several places to detect the black lines on the field mat. The touch sensor is being used to detect that the robot is in close contact with a certain challenge model.
So for our team we can only say that we are actually using all the allowed sensors in order to find our way around on the challenge table. Current status is that the team is able to make all 400 points in around 2 minutes. It does, however, require that the competing team release the space elevator as well.
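That wall-following approach can be pictured with a short NXC sketch; the 15 cm target distance, the ports, and which side the wall is on are all assumptions rather than the team's actual values:

#define US_PORT  IN_4
#define TARGET   15    // desired distance from the wall, in cm

task main()
{
    SetSensorLowspeed(US_PORT);
    while (true) {
        // Assumes the sensor faces a wall on the robot's left,
        // with OUT_B driving the left wheel.
        if (SensorUS(US_PORT) > TARGET) {
            OnFwd(OUT_B, 40);    // too far: curve toward the wall
            OnFwd(OUT_C, 60);
        } else {
            OnFwd(OUT_B, 60);    // too close: curve away from it
            OnFwd(OUT_C, 40);
        }
    }
}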
Such as the bus tickets during the No Limits year. I also think that variation is a good idea. Otherwise teams can program their robot's every move - with enough accuracy, it would work - and there would be no autonomous behavior.
The sensors are easy enough to understand and use, but the younger group has seen no need for it ... yet! This may change real soon now.
We are using two light sensors and two touch sensors on our robot. We gave up on using the ultrasonic sensor because of problems with echoes around the corners of the board (aim the ultrasonic sensor at a corner while rotating the robot a bit and you will see what I mean ;)
Paul Tan.
Co-Coach of St. Clement's Lego Robotics
Toronto, Canada
They've finished most of the missions (except the table mission), and adding more sensors for increased accuracy is not a high priority for the robot. Of course, an even higher priority is getting ready for the research and technical presentations.
Although this has no relevance to the direction of the discussion here, the term you used, "table mission", brought a thought to mind.
Our team has dubbed "the table mission" or "individual atom manipulation" TOM. It is an acronym for Table Of MAYHEM. Partly because it is so annoying to set up, and partly because one of our mentors is named Tom:-)
Does anybody else have some interesting names for some of the missions on this year's table?
They also encountered the difficulty which Jim identified with the light sensor - using a single light sensor doesn't allow the robot to identify its precise location along the line. Using two light sensors might help them out with that, but my team is simply not there yet. Dead reckoning was easier.
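For what it's worth, two light sensors also make it possible to "square up" perpendicular to a line, which addresses the angle problem Jim raised earlier. A hypothetical NXC sketch (ports and threshold assumed), where each wheel stops independently as its own sensor reaches the line:

#define LEFT_LS   IN_2   // assumed: sensor over the left wheel
#define RIGHT_LS  IN_3   // assumed: sensor over the right wheel
#define DARK      40

task main()
{
    SetSensorLight(LEFT_LS);
    SetSensorLight(RIGHT_LS);
    bool leftDone = false, rightDone = false;

    OnFwd(OUT_BC, 40);
    while (!leftDone || !rightDone) {
        if (!leftDone && Sensor(LEFT_LS) < DARK) {
            Off(OUT_B);           // left side reached the line
            leftDone = true;
        }
        if (!rightDone && Sensor(RIGHT_LS) < DARK) {
            Off(OUT_C);           // right side reached the line
            rightDone = true;
        }
    }
    // The robot now sits squarely on the line, whatever its
    // original approach angle was.
}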
We had a practice scrimmage with two rookie teams a couple of days ago - one of those teams used dead reckoning for every mission they attempted, and the other used the touch sensor to orient the robot against one of the mission models.
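That touch-sensor trick is appealing because it re-references the robot's position to the model itself instead of to accumulated wheel error. A minimal NXC sketch of the idea (the port and back-off distance are assumptions):

task main()
{
    SetSensorTouch(IN_1);           // assumed: front bumper on port 1
    OnFwd(OUT_BC, 40);
    until (Sensor(IN_1) == 1);      // bumper pressed: against the model
    Off(OUT_BC);
    RotateMotor(OUT_BC, -40, 180);  // back off a known, repeatable amount
}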
Doreen, in Toronto
Jon T