Some thoughts on FLL competition...
Today I had the chance to be a referee for the Forsythe Alliance competition (Forsythe County school district) held in Cumming, GA. 40 teams showed up, and each team competed in 6 runs with their scores added together for a grand total. Competition started at roughly 9am and ended around 4pm, with a 1-hour lunch in between and small breaks. All in all, it was a blast and I got to meet a lot of great kids, supportive parents, and very committed coaches and teachers. This county has really gotten behind their kids... check out more here.
Anyway, here are my overall observations and opinions, none of which are based on scientific fact or any advanced statistical calculations - mainly just guessing and memory:
1. Mission most often attempted - Satellite
The surprising part about this mission wasn't that so many teams tried for it - it was the number of teams that tried and failed! Most teams lined it up using SET (standard eyeball trajectory) and I would estimate that 1 in 3 failed. Very surprising, isn't it? It takes 10-20 seconds of your time, so you should make sure you nail it every time.
2. Mission least often attempted - Car/Truck switch
Again, not surprising considering the work needed to get the points (two conditions must be met), but what was surprising was that the teams who did attempt it (maybe 1 out of every 10) got it to work.
3. Sensors hardly used at all
I saw a total of 30 teams come through my table, but none (ZERO) used sensors for any portion of the competition. Repeat - ZERO. With all the colors on the table, black lines, and angled lines, I thought for certain that at least 1 or 2 teams would use a Light sensor to help in some navigation... maybe the Ultrasonic to detect some of the Uranium or Corn items, but not a single one. Pre-programmed movement was the name of the game. (For the curious, there's a rough sketch of what light-sensor navigation can look like at the end of this list.)
4. No jigs/templates
Once again, I saw no teams using any kind of template or jig device to line up their robots. A few teams did use the small colored lines that go around the inner edge of the base, but not in a way that I would consider extremely accurate. For example, one team lined the wheels of its robot up with the left-most colored lines, but didn't line the front of the robot up with any particular line - the result was that the movement they programmed kept missing the target (Corn) because the robot wasn't placed at the proper North-South starting position (N-S-E-W is written on the table, FYI). They had to run the same program 3 times, using trial and error to find the proper starting position. This should have been determined during testing.
5. Direction sheets
I saw 4 or 5 teams with a small "instruction manual" they had written, listing things such as the program name to select for a certain mission, the order in which they wished to run the missions, etc. One even had a "failover" selection: if a mission failed, they knew which missions NOT to run that might interfere with later missions. Very well thought out. I hadn't seen this before and was surprised. In the heat of the game, these teams were very methodical and didn't panic.
6. Wheel issues
Two teams repeatedly ran a mission that should have worked (oil rig), but a tire had not been checked before the competition and was either off the rim or out of alignment with the other wheel, causing the robot to rotate. Both times the teams grabbed their bot (penalty oil barrel) and ran it again - same result... grabbed the bot (another penalty) and moved on to another mission. Check your tires! This is such an easy fix but can cause HUGE problems.
7. Hastiness
So many teams were stressed and reacting quickly to their robot. This caused many teams to reach out and grab their robot (penalty) before it entered the base. One team grabbed the robot just as it entered the base and managed to rip off a large portion of a motor and attachment assembly - they wasted 30-40 seconds fixing it and missed out on at least 2 missions. Slow down - deep breaths... yes, time is ticking away, but hastiness caused more mistakes than I can remember.
8. Program selection
I can't tell you how many teams wasted precious seconds trying to find the proper program for a mission or combination of missions. Cycling through the program files does take time. The best team I saw managed this properly by having one teammate find the program while the other handled adding/removing components. Good teamwork.
9. No RCX
30 teams - all using NXT. I didn't see a single RCX. Among the 10 teams I didn't see, there might have been one... not sure.
10. False starts
A few teams were penalized because they pushed the start button before the competition countdown completed. We were very strict with this because so many teams were racing for the satellite and it had to be fair. Don't push your start button until the countdown hits zero.
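If you're wondering what a light-sensor approach might actually look like, here's a minimal sketch of proportional line following, in Python-style pseudocode. The read_light() and drive() helpers are hypothetical stand-ins for whatever your programming environment provides (NXT-G, Robolab, NXC); the point is the control idea, not the exact calls, and the constants are guesses you'd calibrate on your own table.
TARGET = 45      # light reading on the line's edge - calibrate on your own table
GAIN = 0.8       # how hard to steer per unit of error
BASE_SPEED = 50  # below full speed so the sensor doesn't overshoot the line

def follow_line_step(read_light, drive):
    # One control-loop step: steer back toward the edge of the line.
    error = read_light() - TARGET   # positive = drifted onto white
    turn = GAIN * error
    drive(left=BASE_SPEED + turn, right=BASE_SPEED - turn)

# Quick sanity check with a fake "too white" reading:
follow_line_step(lambda: 60, lambda left, right: print(left, right))  # prints 62.0 38.0
Run in a loop, the robot hugs the boundary between black and white instead of depending on a perfect starting position - exactly the kind of self-correction pre-programmed movement can't give you.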
Comments
Thanks a ton,
Ethan Steckmann
The part I fail to understand about this mission is why it fails so much. Even using point and shoot methods, this is nearly always the first mission to be run... meaning the teams have loads of time to set it up! Our team (which I can brag about at great length if needed) did this mission first, and used mostly odometry to complete it (for speed)... and yet we never missed. We had a great couple of kids handling the robot, and they would always line it up exactly the same way.
3. Sensors hardly used at all
I just find this sad. Rotational navigation has its uses, and there are teams like Jonathan's that are really good at it. But I find that you can get so much more reliability and more speed if you use sensors.
1. Mission most often attempted - Satellite
We got this mission every time and it took only 8 seconds
2. Mission least often attempted - Car/Truck switch
We did attempt this mission, and it worked 66% of the time (we made the whole mission the night before competition)
3. Sensors hardly used at all
The night before competition we had 4 sensors (2 bump, 2 light), but we had to cut those missions because they didn't get enough points for the time spent on them
5. Direction sheets
We had these too except we had ours memorized
6. Wheel issues
We had no such issues except one gear-skipping incident
7. Hastiness
We have 30 sec. left over when we run our missions efficiently so we don't have to be hasty
10. False starts
This didn't happen to us
Our team, The A.E.R.O. Cows, #86, got 275, the top score at the competition. That, combined with our excellent presentation (though I do say so myself), was enough for a ticket to state. (Champion's Award, 2nd place)
Go cows!
"I just find this sad."
Indeed... very sad. Some may tout the KISS method, but I would argue that this is nothing more than a hack, considering the mad rush of having to complete these challenges in such a short period of time. It is indeed a shame that the competition aspect has overshadowed the education aspect of the FLL.
So much so that the expectations of some teams to score 400 may entice parents to have undue influence in the performance of the robot.
Sorry to be negative... this comment is not specifically directed at anyone, certainly not any of the honorable contributors on the blog.
1. Satellite - They got the satellite first every match except one (the announcer talked so long before the match that the robot turned itself off, and the kids didn't notice until they pushed the run button and nothing happened). They combined this mission with the tree mission, doing both in about 3 seconds.
2. Car/Truck - They got the truck back to base every time, and only failed to get it to the farm once (the rubber tires stuck on the mat and the truck spun around). They delivered the car every time, and combined that with delivering the wave turbine to the ocean.
3. Sensors - They only used the built-in rotation sensors. The robot has to go more slowly to detect lines (and not overshoot them), and the team decided that speed was more important. They were concerned that the ultrasonic sensor could get false readings if the other team used one on the adjacent table, and decided not to risk it.
4. Jigs - They used a triangular alignment jig in the NW corner to line up the robot for the truck grab, and another triangular jig in the NE corner to line up for the uranium grab. The wall was used for alignment in the oil platform mission.
5. Direction Sheets - The programs were named 1-TreeSat, 2-TruckIn, etc. on the NXT, but the kids had memorized the mission order.
6. Wheel Issues - No issues.
7. Hastiness - They worked very quickly, but had practiced the choreography so much that most times they had 10-20 seconds left after completing their "goal" missions.
8. Program Selection - Programs were numbered, then loaded 6, 7, 8, 9, 10, 11, 5, 4, 3, 2, 1. This makes 1 the default selection, with programs 2-5 accessed with the right arrow, and programs 6-11 with the left arrow (fewest button pushes - see the sketch after this list).
9. NXT used.
10. No false starts.
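To see why that load order minimizes button pushes, here's a small sketch. It assumes (my assumption, not something stated above) that the brick lists files in reverse load order, most recently loaded first, and that each arrow push steps one entry around a circular list:
load_order = [6, 7, 8, 9, 10, 11, 5, 4, 3, 2, 1]
display = list(reversed(load_order))   # assumed menu order: 1, 2, 3, 4, 5, 11, 10, 9, 8, 7, 6

def pushes(program):
    # Fewest arrow pushes from the default selection (index 0):
    # step right through the list, or wrap around to the left.
    i = display.index(program)
    return min(i, len(display) - i)

for p in range(1, 12):
    print(p, pushes(p))   # e.g. program 6 is a single left-arrow push away
Under that assumption, no program is ever more than five pushes away, and the early-run programs sit closest to the default.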
The team averaged over 300 points for the 9 matches they competed in. Their "goal" score was 365, which they got 3 times, and their high score was 380 (had time to do the solar panel). They won first place in the robot performance, and the first place Champion's Award (invited to World Festival!).
The kids are all fifth graders. They used very simple forward/back programs for most missions, with a third motor for grabbing arms. I admire the more complicated robots and programs from other teams, but I think they are mostly from older students, and from veteran teams. I am a veteran coach, but I get all new team members each year, so they have to learn from scratch. I know that their simple solutions work, and that they did the work themselves.
That is great to hear about your team! When you guys are at the Dome in Atlanta in April, please come by the LEGO booth and say hello - I would love to meet your team.
Jim
I would say an equal number of teams tried the solar panel as well. I particularly like having to bring the Truck to base because of the forces it can exert on robots.
3. Sensors hardly used at all
I would break this into a few points:
-If teams do not have good odometry navigation, sensor navigation will be difficult.
-Sensors, when not integrated with proper mechanical structure/programming can be unreliable.
-Places to use sensors have to be picked very carefully. There is a large number of tasks this year, so time is of the essence. In general, using sensors takes time.
-Motor encoders can be used for purposes other than odometry. My team utilizes complex control which makes our robot very reliable.
-Top teams, for the most part, still use sensors. We use 5 sensors (including 3 motor encoders) and score 400 points reliably in tournaments - we did so 5 times at our local tournament in November.
4. No jigs/templates
Again, the top teams will use jigs and/or have flexible/easy alignments.
We personally had no problems with the other issues.
I think the main changes in FLL are: 1) There are more rookie teams, making some common mistakes more pronounced. 2) Even veteran teams are relying on the motor encoders too much and/or not properly using them. 3) Many teams are struggling with proper, robust studless construction. 4) NXT-G is limiting teams' thinking processes.
If you look closely, you will see similar techniques being used as with the RCX in the past, at least in the top teams.
Josh
>It is indeed a shame that the
>competition aspect has
>overshadowed the education aspect
>of the FLL.
I think you're right that using only rotation sensors provides less education. However, that's really FLL's problem. You can't expect teams to purposefully do worse in their performance just to get more education out of the competition. FLL could, however, easily make the challenges such that it would be worth teams' while to use other sensors. For example, they could have objects be placed randomly on the field which the robot wouldn't be allowed to touch - that would be a great twist to the challenge, IMO.
-Jonathan
I think the original post was a very good description of our competition.
I agree that MOST of the teams used a pre-determined process and stuck to it. Our group did make a few changes that didn't work so well but they learned from it and went back to the "plan" which worked just fine.
The other thing I noticed was that time management was not employed well by the teams at the bottom of the list. Attachment swaps often cost enough time to have attempted at least one, possibly two, more missions.
The kids seem GREAT at the engineering and programming; I'm not so sure about strategy and the big picture. That's where I see adults really being a help.
The way I understand it, this is just the third year for our county to do this, and we had 40 teams for this program - and 30% of the participants were girls! I guess you can tell I'm excited and hope to help out.
Thanks for giving me a forum to express my thoughts.
And they have done this before... in City Sights, it was a black-and-white mat, with lines leading to (I think) every mission, rewarding teams that used light sensors. Same goes for every year before that, but then colored mats were introduced, and the lines became more generic and less purposeful. But I don't think the percentage of teams using touch sensors has changed much (at least, I can't think of a reason it would).
Richard
I think this was the FLL's intention. That is, to make the challenges more difficult in this manner. IMO this only served to make it more difficult for the officials. The teams, however, could ultimately lean on their parents and coaches to come up with the best strategy.
The only reasons not to use sensors are ignorance and laziness. FLL is all about fixing both.
I disagree with this statement. To say that is to call some excellent teams ignorant and lazy. It also degrades new teams who do not have the technical experience to do anything else. There is not an "only reason" not to use sensors; there are myriad reasons!
For example, my team heavily relies on other sensors, but also uses a good bit of odometry to navigate. One of our most heavily scoring programs (over 100 points) relies completely on dead reckoning to reach its destination. We do this because we spent hours designing our attachment and its cargo so that it would go dead straight. The result is a mission that worked every time at our tournament. There was education involved. One does not spend weeks refining an attachment and not learn anything. To use other sensors would have been overkill, overly complex, and unnecessary.
I refuse to say that using rotation sensors is a bad thing. I will say that relying on them when better results could be garnered by using other sensors is a practice of negligence on the part of an experienced team.
Sensor use is an implied requirement. Building robots without sensors is no different than building a robot that doesn't fit in the base. Why would a team make this decision? They must be lazy or ignorant.
I'm thinking ignorant is most likely. Maybe they don't realize they are skipping a requirement. Moving the requirement from the scoring rubric to the rules would go a long way towards fixing that.
But I know many teams choose not to use sensors because they don't see the benefit. These teams are lazy. They have a bag of tricks that worked last year, and don't see a benefit to learning something new. Unfortunately their laziness was rewarded with a high scoring robot, and the penalty was just a few points off in technical judging. For these teams, a stiffer penalty may be all that is needed for them to mend their ways.
Were most teams using NXT-G and not Robolab?
One of the things my team liked to say in our Technical judging sessions last year was an analogy about engineers. Suppose an engineer's boss wants him to make a new product that completes certain tasks. Now, let's suppose the engineer finds a simple, efficient, fast, and low-cost design. Is his boss going to say, "Hey! That design should be more complicated and use some more expensive equipment!" Of course not! If anything he'll commend him for finding such a simple solution! We decided to compete as if we were real engineers - since we found a simple, fast, and extremely consistent solution to the FLL challenge, it wouldn't make sense to try to make it more complicated and slower.
Now, some coaches start teams for the main purpose of teaching them new concepts - that's fine! They might not do so well in the competition, but they'll be fulfilling their personal goal. But the FLL competition itself is not a competition about knowledge of the NXT - it's a competition to see who can come up with the best solution to an engineering problem.
>The only reasons not to use
>sensors are ignorance and
>laziness.
Absolutely not! My team definitely wasn't ignorant of how to use other sensors last year (look for my ScanBot in the Idea Book for evidence of this), and we certainly weren't lazy - we worked hard just about every day the competition was going. The reason we used rotation sensors only was because we could see that they made for a faster, simpler, and more efficient robot than one that used other sensors. And our idea paid off when we became the second team (along with the Mindstorm Troopers) in the history of the World Festival to get all perfect scores in the three competition rounds.
>Building robots without sensors
>is no different than building a
>robot that doesn't fit in the
>base.
Yes it is different... there's a rule against building a robot that doesn't fit in base, while there are absolutely no rules, implied or explicit, against using a robot with just rotation sensors. If you think there is, please quote the rule first.
Now, if FLL wants to change the challenges such that it's more efficient to use sensors, that's fine - I think it would be a great idea. But with the current type of challenges, they shouldn't penalize teams for coming up with better (or even "perfect") solutions.
-Jonathan
Wow! Now that's what I call "tough love". I think we would all be better off if we followed your advice.
However I doubt that's going to happen because I would think that proponents of the "bag of tricks" method will just criticize your advice with clever catch phrases like:
- ad nauseum
- KISS
- a solution in search of a problem
I think that the onus should be on FIRST to come up with challenges that force teams to use more sensors or other techniques that foster continued learning.
Maybe they don't.
> some coaches start teams for the main
> purpose of teaching them new concepts
> - that's fine! They might not do so well
> in the competition, but they'll be fulfilling
> their personal goal. But the FLL competition
> itself is not a competition about knowledge
> of the NXT - it's a competition to see who
> can come up with the best solution to an
> engineering problem.
Hmm, here's my personal take on it. FIRST & FLL are:
NOT ABOUT WINNING THE COMPETITION
From the FIRST website, the mission statement:
"Our mission is to inspire young people to be science and technology leaders, by engaging them in exciting mentor-based programs that build science, engineering and technology skills, that inspire innovation, and that foster well-rounded life capabilities including self-confidence, communication and leadership."
Note that it doesn't refer to getting a perfect score. It doesn't talk about "right" or "wrong" ways to solve challenges. It *does* talk about innovation... which (personal opinion), is not reusing the "same bag of tricks" or dead reckoning, *especially* for experienced teams. It's about innovating and trying something new, *even when the old way still works*. It's about making mistakes in a controlled environment *because you learn from them*. Notice that both "science" and "engineering" are listed in the above. Yes, there are some good techniques that have been previously developed that you should know & understand (such as dead reckoning): I'd classify that as engineering. On the other hand, if you become dependent on those techniques, you will likely fail the first time you are confronted with a problem with new assumptions. For that, you need innovation and the ability to *develop* and *evaluate* engineering techniques: I'd classify that as science. Yes, there's a difference - a very real one that's very obvious if you ever catch a scientist in an engineering classroom (asking annoying questions like "why"?) or an engineer in a science classroom (sometimes asking "who cares? I've no way to apply that.").
Personally, I'd like to see FLL challenges that encourage innovation - but they'll probably result in a lot more disappointed & frustrated kids (& parents!). But one thing I strongly feel gets overplayed is "winning". I don't see that referred to anywhere in the FIRST mission statement.
Is it fun to win? Certainly. Will trying to win drive people to work hard & learn things? Absolutely. Is getting the highest score the goal in FIRST, or even FLL?
My opinion: NO.
PS- The best performance I ever saw was not one designed for winning. It was when a team last year completed the challenge in a single program (with, if memory serves, a single attachment change). They didn't do that because it was the best way to win - they did it, I suspect, because somebody stepped up to a team and drove them to innovate and grow, *even though it had nothing to do with "winning"*.
--
Brian Davis
However, the competition - the thing teams compete to surpass each other in - is not knowledge of the NXT. It's solving an engineering problem. In solving the problem, teams will usually learn about robotics, become interested in Science and Technology, etc., like FIRST wants them to do.
Some people have the opinion that it's not important to win as much as it's important to learn - that's great! Others would rather do better in the competition than learn more from it - it's fine to have that opinion as well. But since the competition does not have "amount of knowledge about the NXT" as one of the judging factors, I don't think teams should be penalized for coming up with better solutions without displaying as much knowledge.
>PS- The best performance I ever
>saw was not one designed for
>winning.
The Flying Geeks? Yeah, they were awesome... that was definitely the most amazing solution I ever saw. Don't get me wrong, I don't think badly of teams that have other goals in the competition than I do. I admire them for having that goal (if that was their goal) and completing it.
-Jonathan
"The only reasons not to use sensors are ignorance and laziness. FLL is all about fixing both."
Please also realize that sensors can be used in ways you might not be able to notice. We use the encoders very innovatively in conjunction with other sensors. I would rather see a challenge/challenges on the board that require sensors to complete.
"Were most teams using NXT-G and not Robolab?"
I'm not sure, but I was under that impression.
Our team never relies solely on odometry. Feedback navigation, landmark navigation, and other techniques are a common sight.
Brian, you say "NOT ABOUT WINNING THE COMPETITION" then "Is it fun to win? Certainly. Will trying to win drive people to work hard & learn things? Absolutely."
So, part of the competition certainly is winning.
“If winning isn't everything, why do they keep score?”
Vince Lombardi
I would be just as happy with a competition where there were no recorded scores. All there would be is how much of a showcase your robot is. Then we would see more attempts at delivering the solar panel while on the east side of the table, border-wall-climbing robots, and interesting strategies.
"PS- The best performance I ever saw was not one designed for winning. It was when a team last year completed the challenge in a single program (with, if memory serves, a single attachment change). They didn't do that because it was the best way to win - they did it, I suspect, because somebody stepped up to a team and drove them to innovate and grow, *even though it had nothing to do with "winning"*."
Our goal has generally been to complete the challenge as fast as possible. In No Limits we scored 400 in 1:44. In Ocean Odyssey we scored 400 in 1:43. In Nanoquest we scored 400 in 1:29, only making 2 transitions, one more than the team that used 1 program. Our 2 transitions included doing both sides of the space elevator.
I agree the program is about learning, and we certainly learn A LOT. In fact, several of our meetings focus on other topics of interest. You can learn, have fun, and win all at the same time. Learning and having fun help contribute.
I agree with most of what everyone has said here. I'd like to see more sensors and less emphasis on winning - but until they change this, things will remain the same. Here is the description of the 'Champion's Award':
*The Champion's Award is the most prestigious award that any team can win. It celebrates the ultimate success of the FIRST mission and FLL values. It measures how the team members inspire and motivate others about the excitement of science and technology, solve problems, and demonstrate respect and gracious professionalism to everyone involved in the competition.*
They say "ultimate success" - sounds like something teams should want to represent.
Josh
In the spirit of the season, it is now time for a heartwarming story.
There is this team I remember judging three years in a row. They were easy to remember because they used the same robot each year. It was a simple robot with a rotation sensor and little else. But it was a good mechanical design, very accurate, and did well at the table (250+).
When I saw the team the second year, I knocked down their technical scores and warned them against using the same robot again. They were a bit dismayed to see me judging again in their third year. Their score plummeted even lower.
This year my team competed against them for the first time. When the kids saw me in the pits they dragged me over to see their robot and the programs. It was a complete redesign, laden with sensors. Their coach told me that the team really struggled with the programming, but were very proud of their accomplishment. He didn't have to tell me because I could hear the excitement in their voices and see the pride in their faces. They managed to score 397, the highest score they ever achieved. Something they would never have achieved if they hadn't tried something new.
> Brian, you say "NOT ABOUT WINNING THE COMPETITION" then "Is it fun to win? Certainly.
> Will trying to win drive people to work hard
> & learn things? Absolutely."
>
>So, part of the competition certainly is winning.
Sure. But look again at what I wrote. I said that *FIRST & FLL* are "not about winning the competition", and I think that's true. The *competition* may be about "winning"... but even there the metric may not be a simple scoring (as you point out with the Champion's Award). In short, it would seem much more attention is being paid to "winning the competition" than is invested in the goals of why we have FLL in the first place (teaching kids innovation and accepting challenges).
> “If winning isn't everything, why do they
> keep score?” Vince Lombardi
That would seem like a poor justification for something like FIRST. I would argue the important parts of FIRST have nothing to do with the score... but as humans, we seem to put an *awful* lot of personal energy into "beating the opponent". That may be natural, that may be a great motivational factor to drive people towards other goals... but not if it starts becoming the goal in and of itself. That's when "cheating", for instance, suddenly becomes tempting ("winning is everything").
jonathan daudelin wrote:
> People don't usually start competitions
> just for the sake of having competitions
> - there's usually some goal that the
> competition helps accomplish. :-)
I'm not so sure of that. In perhaps the majority of competitions, the stated goal is nothing more than to determine "who wins". As examples: football, soccer, basketball, martial arts, chess, etc. Most of these "games" or "competitions" have become so ritualized that any other evaluation has become meaningless. FLL *could be* an exception to this... if people keep in mind the original goals.
LEGO events tend to be a little more than that sometimes, as often people go as much to learn, enjoy, and test, as to "win". But even there, winning ends up being a very important driving factor.
> However, the competition - the thing teams
> compete to surpass each other in - is not
> knowledge of the NXT.
However, the event is constructed to try to reward teams that *do* have an understanding of robotics. Otherwise non-robotic solutions would be allowed, using nothing but motors and battery boxes (or even wind-up... I once had a 500 gram sumo robot that was partially wind-up, for instance). I would say the rules and spirit of the event do, actually, try to encourage teams to use the NXT, programming, sensors, etc. And dead-reckoning solutions get around that many times.
Don't get me wrong, I'm always amazed by the innovation of mechanical solutions people come up with. A lot of FLL teams have some *amazing* innovation in their solutions, and it's something I really enjoy doing as well (I recently built a mechanical binary calculator... and not because of a contest, or even an informal challenge, & *certainly* not because it's efficient). But I don't believe it ("how to do dead reckoning better & better") is a primary goal of FLL.
--
Brian Davis
Would someone explain "odometry navigation" to me?
>But FLL decided to remove those
>sections and surround occurrences
>of the word "sensor" with the
>phrase "if used".
So there aren't any rules requiring teams to use other sensors, even implied, right?
Also, if a judge was judging by the rubrics (as he should be), he wouldn't have a reason to penalize a team solely for the fact that they only use rotation sensors, right? Note that I'm not talking about teams that only use rotation sensors but don't use them effectively and can't make their robot consistent with them.
>I guess sensors are "too hard"
>for children to use.
I don't think that's a reasonable conclusion... it would be like saying jigs are "too hard" for children to use just because teams aren't required to use them.
>When I saw the team the second
>year I knocked down their
>technical scores and warned them
>against using the same robot
>again.
I think it's too bad you penalized them for using their same robot two and three years in a row. First of all, the rubrics don't justify this - you can have a unique and creative design that you use more than once. Secondly, I don't think they should be penalized anyway for making a design so adaptable and universal that they could use it for three different challenges!
>They managed to score 397, the
>highest score they ever achieved.
>Something they would never have
>achieved if they hadn't tried
>something new.
Hey that's great... I'm not saying teams should never change. Of course, if a team is consistently getting perfect scores with a certain method, there isn't much scoring incentive to change.
-Jonathan
Dean - I've seen some very highly ranked teams use the same (or extremely similar) robots year after year. It seems, unfortunately, that not everyone shares your opinion (although I do). The rubrics as they are right now are still too subjective. I'd really like to see a technical rubric that is objective, with check boxes for things completed successfully.
At the World Festival last year we were marked down because "solutions that were more simple were judged the same or higher" - this was from our technical judging sheet. We had a complicated and innovative solution to score 240 points in 24 seconds, but the judges did not like it because it wasn't super simple.
The rubrics and challenges should reflect what FLL is trying to teach. If interesting concepts are the goal, then reward teams that think outside the box. If completing everything as simply as possible is the goal, then reward those teams. The key is to make sure this is done in the rubric and not by subjectivity!
What does everyone else think?
Josh
>the event is constructed to try
>to reward teams that *do* have an
>understanding of robotics
Of course... the better understanding of robotics a team has, the better their robot will do, usually.
>Don't get me wrong, I'm always
>amazed by the innovation of
>mechanical solutions people come
>up with.
Absolutely agreed... and I think innovation is one of the aspects of the Technical judging. Of course, there's something innovative, in its own way, about solving a seemingly complex challenge with such a simple design. :-)
Anonymous,
I'm not sure what people mean by dead reckoning, although they seem to be referring to the use of just rotation sensors to navigate (not necessarily in a straight line, or with the robot being aimed by eyeballing). Odometry navigation is just a fancy term for using rotation sensors to navigate.
-Jonathan
P.S. Wow, these are some pretty long posts - deep discussion in process. :P
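For anyone who wants the math behind that fancy term, here is a minimal sketch of differential-drive odometry - turning a pair of encoder readings into a position estimate. The wheel and track measurements below are made-up placeholders; measure your own robot:
import math

WHEEL_DIAM = 5.6   # cm - placeholder, roughly a standard NXT tire; measure yours
TRACK = 11.0       # cm between the two drive wheels - also a placeholder

def update_pose(x, y, heading, left_deg, right_deg):
    # Advance the pose (x, y, heading) by one pair of encoder readings (degrees).
    left = math.radians(left_deg) * WHEEL_DIAM / 2    # arc each wheel traveled
    right = math.radians(right_deg) * WHEEL_DIAM / 2
    dist = (left + right) / 2            # forward travel of the robot's midpoint
    dtheta = (right - left) / TRACK      # change in heading, in radians
    x += dist * math.cos(heading + dtheta / 2)
    y += dist * math.sin(heading + dtheta / 2)
    return x, y, heading + dtheta

# Both wheels turning 360 degrees drives straight about 17.6 cm:
print(update_pose(0.0, 0.0, 0.0, 360.0, 360.0))
The catch, of course, is that nothing in those equations knows whether the wheels actually gripped the mat - which is exactly the reliability gap the sensor advocates above are pointing at.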
All snideness aside, coaches/judges bring different expertise and, despite our best attempts, biases. I'm a mechanical engineer, and I spend a great deal of time teaching about gears, linkages, etc. That influence shows: my teams often have mechanical solutions to problems - problems that could also be solved programmatically.
Given equal reliability, is one approach inherently superior? No. Are different judges impressed by different things? Absolutely. Is there anything a team can do about it? Nope, it is the luck of the draw. They should take the approach they think is best, without trying to second-guess the judges' opinion, and be proud of what they have accomplished.
As far as a requirement for sensors goes, the simple fact that teams do well without them demonstrates that sensors are not needed. Indeed, the second-place performance award at our State tournament was a dead simple RCX-based robot that used only the timer for navigation. No other sensors were used. They were consistent with the top three scores of the day until, literally, that last match. IMHO, they are to be commended rather than derided. They looked at the game as a whole and realized complexity was not necessary and proved it. Bravo, that is good engineering. The fault of such a performance, if any, lies not with an ignorant and lazy team, but with FIRST. If FIRST wants to encourage complex robots that "bristle" with sensors, they should design the game accordingly. I would love to see missions that inherently required sensors (such as the bus stop in "No Limits"), but I'd hate to see the requirement appear in the rules. Likewise, I'd love to see just a flat-out hard mission included each year. Neither was present in the Power Puzzle game and teams should not be penalized for designing accordingly.
Regards,
Tom
- Joshua
I agree with this, but what was the point of FLL changing the name from Director's Award to Champion's Award? This makes it sound as though the team that wins this award is THE champion, and none of the others are (at least, that's what it would sound like to newer teams) - I thought one of the FIRST points was that everyone was a winner.
Gotta jump into this discussion when I get the chance...
Richard
-Jonathan
I'm curious as to why you'd hate to see it in the rules. I've never been involved in FLL, this is my first year w/ a Junior FLL team. It is written in the rules that you must use a motor to make something move. This forces the younger kids to learn about motors and connecting them to their LEGO creations. For FLL, I'm not sure how I feel about sensors. Off the top of my head, I would agree that it would be nice to have challenges that would require sensors, but not have this written into the rule book. If a team came up w/ an innovative way to complete the challenge w/out using sensors, they shouldn't be penalized for their ingenuity.
The rubrics also contain clauses that allow the judges to mark down point-and-shoot type solutions, as they are not robust. The penalty is not applied if the team uses a clever jig or some other mechanism that minimizes the likelihood of operator error.
Creative, unique, complete, innovative, efficient, accurate, logical, effective, assembles easily, stable, robust, modular (if modules are incorporated), modules added/removed easily, and elegant.
Josh
>their robot is not sturdy enough
>to survive a drop.
They shouldn't be, as the rubrics do not justify such penalties.
Notice I'm not talking about the base being stable and robust... that is a factor in the rubrics. I think it's a good factor, since being stable helps the robot be consistent in its performance. Having a robot that can withstand a drop from four feet, however, doesn't usually help the robot's performance in the FLL challenge, even if it shows that the team knows how to make really strong robots. :P
And Josh is right... light weight and compactness are not factors in the judging rubrics.
-Jonathan
Thanks,
Tom
I'd hate to see a requirement for sensors in the rules for exactly the reason you mention: by defining how a mission is to be accomplished, rather than the desired outcome, creative thinking and innovation are limited.
For example, in "No Limits" the bus stop mission required teams to find a white flag that could be in one of three positions. http://www.firstlegoleague.org/default.aspx?pid=14190#ReadSigns
The obvious way to do this mission was to use a light sensor. But it was not required. Most teams took that approach, but not all. Some had three programs for the three different positions, while others would tap a touch sensor a certain number of times before the match to set a variable (there's a rough sketch of that trick below). Some just gambled.
Had the light sensor been required, teams would have been locked into a solution that was not necessarily the best.
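That tap-counting trick might look something like this - a minimal sketch, with read_touch() as a hypothetical helper that returns True while the sensor is pressed:
import time

def count_taps(read_touch, window=5.0):
    # Count touch-sensor taps during a short setup window before the match.
    taps, was_pressed = 0, False
    end = time.time() + window
    while time.time() < end:
        pressed = read_touch()
        if pressed and not was_pressed:   # rising edge = one new tap
            taps += 1
        was_pressed = pressed
    return taps

# A team's convention might be: 1 tap = near flag position, 2 = middle, 3 = far.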
An example of a mission that had a required method as well as an outcome is the sub from "Ocean Odyssey". http://www.firstlegoleague.org/default.aspx?pid=15910#DeploySubmarine While there is no doubt that defining the method made the mission difficult, it severely limited the range of solutions. Every team that attempted this mission was locked into the same basic approach.
IMHO, the best mission since I've been involved was the flags in Ocean Odyssey. http://www.firstlegoleague.org/default.aspx?pid=15910#ConductTransectMapping
This mission made it easy to get some points but exceedingly difficult to get all of them. The task lent itself well to the inclusion of both sensors and mechanical design. The creativity displayed by the teams with their different approaches remains one of the highlights of my FLL experience.
My 2 cents,
Tom
I agree the flags mission certainly was an excellent mission.
Josh
http://www.hightechkids.org/?2-1-1054
Scroll down to the section for Officials Training Downloads.
There's a lot of other interesting information in that website as well. Minnesota appears to have a very mature FLL organization with good support and training for both coaches and officials. I wish that level of support were available for all teams.
Dean said:
Winning a qualifying or state tournament, or even the world festival, should not be how success is measured.
The robot performance is designed to attach a number to the robot's performance. FIRST is designed to teach kids about science and technology. If teams are not learning about science and technology through the table missions, there is a flaw in the performance part of the challenge. Kids are doing what they are told, getting high scores... but somehow, they are supposed to be learning about science and technology at the same time. My team tries: we build a new robot each year, we try multiple solutions to the missions. In the end, I'm proud to say that I think members from our team know more about robotics and problem solving than a heck of a lot of other people.
But that's only because we work at it. Other teams are just doing what they're supposed to; there is no requirement to go the extra mile. How are they lazy if they do what they are told?
I wish all teams made learning about robotics their primary focus, but FIRST doesn't ask them to. Brian quoted their mission statement, but it contradicts the rest of the challenge.
This is probably why I like research more. :)
http://flickr.com/photos/kris_kumar/2129963466/in/set-72157602292054399/
Here are the photos from the 2007 season:
http://flickr.com/photos/kris_kumar/sets/72157602292054399/
"I think it's too bad you penalized them for using their same robot two and three years in a row. First of all, the rubrics don't justify this - you can have a unique and creative design that you use more than once."
Before one discusses how to apply the criteria, one must fully define the criteria itself. I think most people do not actually use the proper definitions of these words. Indeed, FIRST itself may be guilty of this.
Unique - 1) being the only one 2) a) being without a like or equal b) distinctively characteristic 3) unusual
The definition of creative is, well, let's just say we'll probably have to use an interesting interpretation of the definition.
Creative - 1) marked by the ability or power to create 2) having the quality of something created rather than imitated 3) managed so as to get around legal or conventional limits
So, for unique we're looking for "home designed" robots. That is, something other than the ubiquitous tribot, tankbot, or other design found in the standard books and publications.
For "Creative", this is more difficult to apply. The second definition makes this look almost identical to "unique", but I'm sure most judges are applying definintion #3 - something that gets around conventional limits.
Given this definition, I would have found it hard to rate the Built on a Rock robot creative. A four wheel drive design is so similar to the tankbot from the RIS days built with wheels instead of treads that the main drive train is quite ho-hum.
But assume that it was considered unique and creative its first year. In subsequent years, I would have a hard time judging any reused robot to be unique or creative.
Teams in Atlanta last year were asked to document their robots so other teams could learn. I have not seen any of the top teams fully document their robots.
In fact, at one point someone asked some questions regarding the robot done by Built on the Rock, and they refused to divulge it, saying that they planned on using it again.
You can use a unique idea more than once... being unique doesn't mean the idea is only used once. But let's just assume it does for a minute. Why say that something only stops being unique after a year? We might as well also say that after the team uses a robot at their qualifying tournament, it isn't unique at the state tournament, right? :P
>Teams in Atlanta last year were
>asked to document their robots so
>other teams could learn.
The only thing I recall needing to do was send in pictures of our team and robot, and a description of our team dynamics and the like. If, by "fully documented", you mean we were supposed to do stuff like make building instructions of our robot or explain exactly how it works, I don't remember being supposed to do anything like that. Do you have a quote from any rules or something?
>In fact, at one point someone
>asked some questions regarding
>the robot done by Built on the
>Rock and they refused to divulge
>it saying that they planned on
>using it again.
This is a very vague statement... who's "they"? Who asked, and what did they ask? If someone had asked us for building instructions of our robot, then of course we would have declined. If they had asked for less specific details, we probably would have told them. For example, one team was interested in how our robot made such accurate turns, and we told them about our 4WD and rotation sensors. Also, I doubt the validity of that statement, since I don't remember us ever saying the team was going to use a similar robot for the next season until the season was over, when I hinted at it on the LMSF.
That said, however, I don't think it's necessary for teams to tell other teams anything about their robot - they should be allowed to keep team secrets, just like sports teams generally don't tell opponent teams about their best plays. :-)
-Jonathan
I'm not sure why, but I have heard discussions that teams win the presentation during the 20-minute follow-on interview, and that the 5-minute presentation is often a poor indicator of the depth and breadth of the team's research and knowledge.
We've also discussed having the winning design and programming teams (we separate them) provide documentation so that less experienced teams can catch a glimpse of what kind of work is needed to win a tournament. I'm wondering what would be a good format. Anyone have suggestions?
INSciTE is the organization that runs FLL in Minnesota. Though I've done a lot of volunteer work for this fine organization (many others have done much more) I am not an employee nor any kind of official representative. Opinions expressed here are my own.
Teams are allocated 20 minutes for the presentation AND the interview. With setup and tear-down the interview is around 10 minutes instead of 20.
Lastly I would like to apologize for calling some FLL teams "lazy" or "ignorant". Though the comment generated a lot of good discussion I think some folks were hurt by it. That was not my intention. I have nothing but respect for anyone who gives of themselves to help advance FLL. To anyone hurt or offended by my comments, I'm very sorry.
We do occasionally miss some negative comments (with over 2 years worth of posts, it's very difficult sometimes to police every post and the comments for that post) but as a rule we try to encourage discussion that is positive and grows the community. We rely on our readers to let us know when they see something offensive or objectionable and we'll do our best to respond in the correct manner.
As 2008 starts, let's all remember that our community is a fun one - we play with robots!
Jim
A question on the research presentation in your state.
You said they win it during the 20-minute follow-on interview and that the 5-minute presentation doesn't show their true understanding of the issues and problems. I think that's too bad, because even seeing a 5-minute presentation gives other teams the benefit of seeing the presentation style, methods, and tools that at least earned the winners that callback. Without seeing this, many teams may forever be stuck in the middle, trying to do better, but not understanding what they need to do to improve.
This is probably just as much a fault with the rubric as with anything else. In school they say you should know the grade you're getting. I believe the same should be true here.
With this in mind, some teams last year stated that they had patents. Seven months is a ridiculously short period of time to be granted a patent. I would like to know the exact timeframe they used for their research, just who did the research, and who did all the patent research. This isn't something that a bunch of kids gets together and does, and, not to be too negative, this smells of parent involvement.
INSciTE includes sample presentations from previous years in the video library. These include the judges interview after the presentation. We also have sample technical judging sessions so new teams know what to expect at a tournament.
Don't be too quick with accusations of parental involvement. Some of these kids work together year round and are very sharp. Last year a team at our high school robotics competition (FLL for old kids that can't let go) won the opportunity to work with a local nanotechnology firm to see if they could advance their research topic. The result of this was they were invited to present a paper at the Nanotech 2007 conference. Just the year before I was judging these girls at the Minnesota state FLL tournament.
As for the patents thing at the world festival, the story I heard from the Minnesota team that attended was that the team in question had applied for patents. While this is pretty cool, anyone can apply for a patent. Heck, being awarded a patent isn't all that unusual. A friend of mine at work has six. Even I have one. To me, having a patent mostly means you were dumb enough to pay an attorney to protect intellectual property that nobody else is interested in. The plaque looks nice in my cube, though.
If you deliver the message that kids can't really do much, and that you shouldn't expect much from them they may decide to lower their standards to meet your expectations.
Yes, anyone can apply for and be granted a patent, but it normally takes quite a bit of time - far longer than the 2-3 months most teams have before their state tournament, and still longer than the 7 months they have from the challenge posting to the world festival.
To do the proper background research takes a professional in the field. This is something that no middle school student is going to be doing and it would be a very, very, very rare high school student that could attempt it as well.
While you and I both agree that patents don't mean a whole lot, they certainly do appear to have meant something to the judges in Atlanta last year. It was something to hear the announcer say (loudly), "And team number 1234 was awarded two patents for their work in xyz."
I don't deliver any expectation that kids can't do great things, however, if I were a judge on these panels, I would also ask a few questions about how the idea for a patent came up, what procedure was followed, what were some of the results/etc, just to see if the kids really did the work. I'm curious as to what the results would be with those answers in front of them.