FLL design theory...

This post is targeted more toward coaches and FLL participants. I'm curious about your plan of attack for the challenges in this year's FLL events. Most of the NXT bots I've seen so far do NOT use sensors (other than the built-in rotation sensors in the motors). My question is this: are you choosing not to use sensors for a particular reason (difficult to program, unsure how to use them properly, etc.), or have you simply not found a use for them in the challenges?

Are any of the teams out there using sensors? Without giving away your secrets, are you finding them useful for successfully completing one or more challenges?

Many of us in the NXT community are simply curious about how FLL participants are doing when it comes to programming... we'd enjoy hearing your comments.

Comments

David Levy said…
I spent two entire sessions at our school club having the students calibrate their sensors to the room's ambient light and focus entirely on navigation using the light sensor on the FLL NANO Challenge mat (notice all the black lines?).

This was very easy for them to do because they had already gone through the Robot Educator tutorial, which includes missions to "stop on a black line" as well as line following. I simply told them to refer to the Robot Educator tutorial for examples. Most of them were able to create programs that worked on the table. One child even created a line-following routine that drove right to the space elevator from home base.
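In text form, the pattern those tutorial missions teach looks roughly like the sketch below. This is just a minimal illustration in Python, assuming hypothetical (simulated) sensor and motor helpers - the students' actual programs were built from NXT-G blocks.

```python
# A rough sketch of "stop on a black line" with a midpoint calibration
# step. The sensor and motor helpers are hypothetical stand-ins with
# simulated readings, not a real NXT API.

_SIMULATED_READINGS = [62, 60, 58, 40, 28]  # bright mat, then a black line

def read_light() -> int:
    """Stand-in for a raw light-sensor percentage (0-100)."""
    return _SIMULATED_READINGS.pop(0) if _SIMULATED_READINGS else 28

def set_motors(power: int) -> None:
    """Stand-in for setting both drive motors (0 = stop)."""
    print(f"motors -> {power}")

def calibrate_threshold(white_reading: int, black_reading: int) -> int:
    # Midpoint calibration: sample the mat's white area and a black
    # line under the room's ambient light, then split the difference.
    return (white_reading + black_reading) // 2

def drive_until_black(threshold: int) -> None:
    set_motors(50)                   # drive forward
    while read_light() > threshold:  # still over the light-colored mat
        pass
    set_motors(0)                    # stop on the black line

drive_until_black(calibrate_threshold(white_reading=62, black_reading=25))
```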

Unfortunately, with the exception of one team using a single "stop on black line" for the dirt trap, I don't see any use of the light sensor in their current FLL mission solutions. This may in part be due to the accuracy that can be achieved by simple dead reckoning techniques.
Anonymous said…
Early on, we tested robots with ultrasonic sensors, touch sensors and light sensors. The way the team ended up doing the missions, there didn't seem to be much use for the ultrasonic and touch sensors. We are using light sensors, but it appears there are some "issues" with the NXT-G software that prevent the light sensors from working as well as we would like. The team has found ways to work around them, but add it to the list of shortcomings of the NXT-G software!
Brian Davis said…
What are the "issues" with the light sensors that you mention? Can you elaborate? I've wondered why line following (especially on the nano-mat) isn't more popular. I'm starting to think that there may have to be harder challenges for FLL in the future to encourage the use of sensors. Part of the issue is that the FLL challenges are often static... for instance, you *know* where a certain element is. That doesn't have to be the case.

As a simple example, cooperative tasks between the two playing fields can easily be arranged where one task has to be done if you are the "first" to the center, while a different action or task has to occur if you are "second" to the center... and the number of points you get depends on how rapidly the cooperative task is completed. Now, each team has an incentive to cope with whatever the situation is at the center, as fast as possible (i.e., not waiting for a later run and switching programs to cope with what the other team has done).

Another example: picture a field with one element in one of two positions, selected *after* the robot has left home base. The robot has to dynamically adapt to the conditions on the playfield under autonomous control... putting the intelligence back on the robot, and moving it off the team.
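Sketching that idea out (in Python, with hypothetical placeholder helpers - not a real NXT API), the whole decision could be as small as this:

```python
# A hedged sketch of the "one element, two possible positions" idea:
# check with the ultrasonic sensor and branch. All names here are
# hypothetical placeholders.

def read_distance_cm() -> int:
    """Stand-in for an ultrasonic reading in centimeters."""
    return 120  # simulated: nothing detected nearby

def run_mission(variant: str) -> None:
    print(f"running mission variant: {variant}")

DETECT_RANGE_CM = 40  # assumed: position A puts the element within 40 cm

# The robot decides on the field, under autonomous control, instead of
# the team picking a pre-programmed run in advance.
if read_distance_cm() < DETECT_RANGE_CM:
    run_mission("element in position A")
else:
    run_mission("element in position B")
```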

--
Brian Davis
David Levy said…
I like the example of a playing field with an element in one of two positions. Another variation could be that a few elements vary by about 6 inches from mat to mat. Teams would be allowed to take measurements prior to a round but would not be allowed to run tests with the robot on the table. This would make it slightly more difficult to use a dead reckoning solution.
Anonymous said…
I think you can still use dead reckoning with a fair bit of accuracy, even with a slight variation in the location of parts... that just has to be taken into consideration when programming. I think a combination of MOVE blocks (set to use rotation or degree values), paired with either the Touch or Ultrasonic sensor, could be used to accomplish all of the challenges.
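For what it's worth, one way that combination might look in text form is sketched below. All of the helpers are hypothetical stand-ins, and the wheel size is an assumed value:

```python
# Dead-reckoned moves measured in motor degrees, with the touch sensor
# as a backstop. The helpers are hypothetical stand-ins, not a real
# NXT API.

WHEEL_CIRCUMFERENCE_CM = 17.6  # assumed wheel size; measure your own

def move_degrees(degrees: int) -> None:
    """Stand-in for a MOVE-block-style command using rotation degrees."""
    print(f"driving {degrees} motor degrees")

def touch_pressed() -> bool:
    """Stand-in for the touch sensor; simulated as already in contact."""
    return True

def cm_to_degrees(cm: float) -> int:
    return round(cm / WHEEL_CIRCUMFERENCE_CM * 360)  # 360 deg per wheel rev

# Dead-reckon most of the way, then creep forward until the touch
# sensor confirms contact with the mission model.
move_degrees(cm_to_degrees(45.0))
while not touch_pressed():
    move_degrees(cm_to_degrees(1.0))
```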

My problem with the Light sensor and the "dark" areas on the table is that a dark area doesn't give any indication of relative position to a challenge - I mean that when the LS detects the black line, for example, you have no way of knowing whether you're hitting the black line from a perpendicular direction or from an angle... and this would make it difficult to use the LS to determine accurate locations on the table. Maybe I'm wrong...

The Sound sensor is completely useless in this type of competition where the sound level is always LOUD.

Jim
David Levy said…
I've been encouraging the students to try a combination of techniques for a single mission. For example, dead reckoning can be used to establish a proper bearing on a mission, but then the light sensor can be used as a trigger to stop or change direction.
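A rough sketch of that hand-off, assuming hypothetical helpers and simulated sensor readings (the students' real programs are in NXT-G):

```python
# Dead reckoning sets the bearing, then the light sensor takes over as
# the stop trigger. All helpers are hypothetical stand-ins.

_READINGS = [60, 59, 57, 35]  # mat, mat, mat, black line

def read_light() -> int:
    return _READINGS.pop(0) if _READINGS else 35

def turn_to_heading(degrees: int) -> None:
    """Stand-in for a dead-reckoned turn using motor rotations."""
    print(f"turning {degrees} degrees by rotation count")

def set_motors(power: int) -> None:
    print(f"motors -> {power}")

BLACK_THRESHOLD = 45  # assumed calibrated value for this mat and room

turn_to_heading(30)   # dead reckoning establishes the bearing...
set_motors(40)
while read_light() > BLACK_THRESHOLD:
    pass              # ...the light sensor decides when to stop
set_motors(0)
```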
Anonymous said…
Hi

Our FLL team is using the following sensors on our NXT robot for this year's Nano challenge: 3 rotation sensors, 1 touch sensor, 1 light sensor, and 1 ultrasonic sensor. The ultrasonic sensor is being used to follow along one of the walls of the table at a programmed distance. The light sensor is used in several places to detect the black lines on the field mat. The touch sensor is being used to detect that the robot is in close contact with a certain challenge model.
So for our team, we can say that we are actually using all the allowed sensors in order to find our way around the challenge table. Current status is that the team is able to score all 400 points in around 2 minutes. It does, however, require that the competing team release the space elevator as well.
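For readers curious how wall following at a programmed distance can work, here is one common approach - a simple proportional steer on the ultrasonic reading. This is a hedged sketch with hypothetical stand-in calls, not the team's actual code, and it assumes the wall is on the robot's right:

```python
# Proportional wall following: steer in proportion to how far the
# ultrasonic reading deviates from the target distance.

TARGET_CM = 15  # the programmed distance from the wall
GAIN = 2        # steering correction per centimeter of error

def read_distance_cm() -> int:
    """Stand-in for the ultrasonic sensor."""
    return 17  # simulated: drifted 2 cm too far from the wall

def set_motors(left: int, right: int) -> None:
    print(f"left={left} right={right}")

def wall_follow_step(base_power: int = 50) -> None:
    error = read_distance_cm() - TARGET_CM
    # Too far from the wall -> steer toward it (speed up the outside
    # wheel); too close -> steer away.
    set_motors(base_power + GAIN * error, base_power - GAIN * error)

wall_follow_step()  # in practice this runs in a loop all along the wall
```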
Anonymous said…
Another example: picture a field with one element in one of two positions selected *after* the robot has left home base. The robot has to dynamically adapt to the conditions on the playfield under autonomous control...
Such as the bus tickets during the No Limits year. I also think that variation is a good idea. Otherwise, teams can program their robot's every move - with accuracy, it would work - and there would be no autonomous behavior.
Anonymous said…
We haven't even started programming yet, and I'm starting to worry a bit. Being a rookie coach with a rookie team, we're all a bit lost.
Anonymous said…
I'm coaching 2 teams. The older team (Gr. 7/8) is using sensors, while the younger team (Gr. 5/6) is using the motor rotations alone right now.

The sensors are easy enough to understand and use, but the younger group has seen no need for them... yet! This may change real soon now.

We are using two light sensors and two touch sensors on our robot. We gave up on using the ultrasonic sensor because of problems with echoes around the corners of the board. (Aim the ultrasonic sensor at a corner while rotating the robot a bit and you will see what I mean ;)

Paul Tan.
Co-Coach of St. Clement's Lego Robotics
Toronto, Canada
Anonymous said…
My team is made up of 9- and 10-year-olds, with one 12-year-old. They have had the most success with the rotation sensors. I think the NXT makes it so easy to use odometry that it becomes difficult to think in terms of other sensors.
They've finished most of the missions (except the table mission), and adding other sensors for increased accuracy is not a high priority for the robot. Of course, higher than that is getting ready for the research and the technical presentations.
Anonymous said…
(except the table mission)

Although this has no relevance to the direction of the discussion here, the term you used, "table mission," brought this thought to mind.

Our team has dubbed "the table mission" or "individual atom manipulation" TOM. It is an acronym for Table Of MAYHEM. Partly because it is so annoying to set up, and partly because one of our mentors is named Tom. :-)

Does anybody else have some interesting names for some of the missions on this year's table?
Anonymous said…
I have a second-year team. They are using the built-in rotation sensors, as well as a touch sensor which is used to orient the robot against the bumpers. They did some work with the light sensor but eventually decided it was unnecessary for their chosen approaches.

They also encountered the difficulty which Jim identified with the light sensor - using a single light sensor doesn't allow the robot to identify its precise location along the line. Using two light sensors might help them out with that, but my team is simply not there yet. Dead reckoning was easier.
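To make concrete what two light sensors might buy here: if each side of the robot drives until its own sensor sees the line, the robot squares itself up regardless of the approach angle. A speculative sketch, with hypothetical helpers and simulated readings:

```python
# Square up on a line with two light sensors: stop each side of the
# drive independently when its sensor reaches the line.

_LEFT_READINGS = [60, 58, 30]       # left sensor reaches the line first
_RIGHT_READINGS = [61, 60, 57, 31]  # right sensor gets there a bit later

def read_left() -> int:
    return _LEFT_READINGS.pop(0) if _LEFT_READINGS else 30

def read_right() -> int:
    return _RIGHT_READINGS.pop(0) if _RIGHT_READINGS else 31

def set_motor(side: str, power: int) -> None:
    print(f"{side} motor -> {power}")

BLACK = 45  # assumed calibrated threshold

def square_up_on_line() -> None:
    left_done = right_done = False
    set_motor("left", 30)
    set_motor("right", 30)
    while not (left_done and right_done):
        if not left_done and read_left() < BLACK:
            set_motor("left", 0)   # this side is on the line; hold it
            left_done = True
        if not right_done and read_right() < BLACK:
            set_motor("right", 0)
            right_done = True
    # With both sensors on the line, the robot ends up perpendicular
    # to it, whatever angle it approached from.

square_up_on_line()
```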

We had a practice scrimmage with two rookie teams a couple of days ago - one of those teams used dead-reckoning for every mission they attempted, and the other used the touch sensor to orient the robot against one of the mission models.

Doreen, in Toronto
Anonymous said…
Our team consists of children of every age between 9 and 14. We have tried and tried to find uses for the sensors but have found none that improve our accuracy or reduce our time. We are using the built-in rotation sensors and the NXT buttons. Last year we used a rotation sensor for distance and touch sensors to change programs, so what we are doing this year is very similar.

Jon T
