
Marvelous Minneapolis Matchup



Crown - Talented corps. Odd show. Boring show. Loved the tag of the music from The Abyss.

Blue Devils - Talented corps. Odd show. Boring show. The field looked like a ring toss game. A few super awesome jazz chords.

Cadets - Talented corps. Couldn't help but think of '99 SCV and '93 Star. The guard looked like a container of brightly hued Tic-Tacs.

SCV - Talented corps. Awesome uniforms! The '89 Cadets still can't be surpassed in the Les Mis department. :thumbup:

Phantom Regiment - A field of lackluster Storm Troopers. :laugh: Very boring show.

Bluecoats - I like the uniforms. Disliked the show. I'm just not into small-town Americana themes. Not a fan of the bleachers on the field.

The Madison Scouts - Boring. For all this "reinventing" of the Scouts brand, there is nothing exciting about it. They are doing the same schtick from years ago when Stewart was at the helm.

The Cavaliers - Another case of "oh how the mighty have fallen." The drumline is their lone bright spot. Boring show.

Boston Crusaders - I really, really wanted to like this show. The entire gray-to-red thing wasn't as exciting as everyone made it out to be. SCV's tunnel is still way more "ooo and ahh" inducing than what Boston did, and that was way back in '85 and '86. :thumbup:

Blue Knights - Boring. A return to body work for the sake of body work, silly corps proper skipping and flailing about, looking like the early '90s Glassmen. The voice effects overwhelmed the music.

Spirit - Should have been beaten by the Blue Stars. A yawner of a show. For a corps that's trying to include a touch of the old Spirit, they are missing the boat. If they really want to grab the crowd while playing something classic, they should play their '83 opener, "Brothers of Bop." I'd welcome that back in a heartbeat from Spirit!

Blue Stars - 99.9% awesome! Best uniforms! Great show; classic in styling and yet fresh and new. Very exciting. The flags need a lot of attention. The gap between them and the corps below them has widened. They should be, and deserve to be, ahead of Spirit and very, very close to the Boston Crusaders.

Crossmen - This show reminded me of their '06 production. It's a jumbled mishmash of tunes. I'm all for a great variety of music, but the theme just isn't tying them all together. Jazz it up next year, Crossmen!

Troopers - Welcome back! I would have liked to see them with the navy blue jacket they were considering; pretty cool uniform, however. This show brought back memories of the '80s Troopers.

...I didn't see the rest of the corps.

You " didn't see the rest of the Corps " ? Well look at the positive in that. Its 8 less Corps you had to write " boring " in your review, and 8 less times we had to read the word " boring " in your boring review. :sleeping:

Edited by BRASSO

You know, I started this season convinced CC would beat BD and the Cadets would beat SCV. But after these Minneapolis performances I'm no longer sure. I'm growing less enamored with the second half of Crown's show than I was originally... I'm not entirely sure why, though. And I'm increasingly enamored with BD's show overall as the minor tweaks make the designers' intentions clearer over the course of the season.

It's more of a toss-up for me now. That is, unless BD seriously doesn't change the last minute and a half of their show... and assuming that BD's brassline gets its act together.

That's funny. I'm exactly the opposite. I was impressed by BD's show early on but, now, I'm simply infatuated with Crown's show.

Watch what happens in the closer; there's a crown in there, I'm sure. And musically, after the ballad, I've not witnessed a more driving show to the cutoff since '87 SCV.

I'm going to be a basket case watching this in Indy.


skywhopper, on 14 July 2013 - 08:18 AM, said:

Guys, the specific numbers are meaningless. They're affected by the show order, the number of corps, the judges' familiarity with the shows, and much more. You can't compare scores between any two shows, and even spreads are meaningless, because judges have to leave themselves room to fit later corps in between earlier ones. So if the third-to-last corps does well, and the second-to-last corps does just barely better, you still have to give a two-tenths bump so that the last corps can fit in between if necessary. Multiply that across 10 judges and there's a lot of numbers management going on.

You'll notice that in almost every subcaption where BD came in 2nd to Crown, they are 0.1 below, and whoever got 3rd was 0.2 below Crown. BD went on last, but with so many corps and a maximum reasonable score to give out, the judges put Crown 0.2 ahead of whoever the previous top scorer was, and when BD came on they slid them into the gap they'd left. There's no room for these judges to be assigning objective numbers. It's just not possible.

I probably don't understand scoring as well as you do, but still, your post doesn't seem to make sense, Sky.

First, you start by saying scores can't be compared between shows, but then you pivot to judges scoring one show by leaving room to slot later finishers.

And is the correct word "subjective" and not "objective"?

Next, characterizing the highest score (presumably for the night) as "maximum reasonable" leaves lots of ambiguity and, reasonably, subjectivity.

Lastly, your concept, therefore, would suggest that judges come into the show with some preconceived notion of the "reasonable" high and low scores for that point in the season, and simply divide that range among the corps they slot that night.

Isn't your analysis, if true, actually a damning condemnation of our "scoring" system? If judges are truly just plugging in numbers to justify their subjective slotting that night, why all the bother with captions and boxes at all?

I'm going to try to say this as bluntly as possible, but first I want to reiterate that while I do judge fairly often at the local/regional level, I do not judge at the National level of any band or winter percussion circuit, let alone at the DCI level.

That being said, the gist of this wordy post is: judges follow the logical ranking/rating of groups based on the rubric on the sheets plus the performance on the field of ALL groups on any given night. However, there are other factors that come into play as well. It's easy to sit on the couch/desk/whatever and criticize perceived inconsistencies: heck, I do it as well! But once you know all of the factors that go into judging, it's easier to understand everything involved in ranking/rating groups at any given show.

What you are trying to do, Garfield, is approach adjudicating at the utmost logical level, using rubrics and a sort of mathematical estimate of "this performance level = this precise score, while this design level = this precise score." And for sure, that is PART of what goes into judging. The front of the sheets has an abbreviated list of descriptors that go into each side of the Caption, while the back of the sheet has a scale describing not only all of the factors that go into the sub-caption, but how well a unit must perform (or design) in order to achieve a Box threshold.

For example, let's look at Percussion (that's the caption I remember best off the top of my head, and since that's what I often judge, I'm most comfortable talking about it).

** The Performance side of the sheet (or maybe it's called Achievement now) will have the descriptors on the front that say something like "Clarity," "Balance/Blend," "Uniformity," "Expression/Musicality," "Precision." The Design side of the sheet will have the descriptors "Depth of orchestration/vocab," "Range of skills," "Simultaneous Responsibility," "Range of everything asked of members together (technical challenge, simultaneous demand, etc.)."

** The back of the sheets will be split in two, with one side being "Rep" and the other side being "Performance/Achievement." It will contain the different qualifiers for each Box:

1) Rarely 2) Infrequently 3) Sometimes 4) Often 5) Always, with each box having a range of possible points (Box 5 would be, say, 90-100; Box 4 would be 70-89, etc).

So the rubric defines the ranges that a group will possibly score based on their design & performance any given show. The standards don't change, as the sheets are the same from the first show in June to Finals night.

The parts that you are seemingly disregarding, Garfield, are plentiful:

* judges are human beings, with their own experience, eyes/ears, preferences, etc. In my region, for example, a percussion judge could be a snare drummer from Blue Devils who makes 85% of his comments about the battery, and only gives the front ensemble a cursory sampling. Or conversely, you could get a percussion judge who performed in SCV's front ensemble, who will make the majority of front ensemble comments while giving the battery comments on the obvious stuff. Or you could get someone who marched snare drum but also went to music school, who gives a fairly well-balanced tape. All three judges have great experience, were trained by the association, and might very well make good tapes, but they also have their strengths & weaknesses, their specialties, etc. All judges are human beings, so it will not be uncommon for different people to "see" different things and rate a group differently from night to night.

* performances change night to night. Groups improve, design elements are added, design elements become more recognizable, new design elements don't work, a corps has a bad night, a section has a bad night, a person has a bad night (e.g., a snare drummer). A venue can mess with a unit if it has a weird echo, if the sun is in their faces, or if the field is lined funky. Maybe the corps arrived at their housing site at, like, 10am and had a bad rehearsal day and then a bad run. There are SO MANY unknown factors when it comes to performance, especially earlier in the season when staff is still experimenting with what works, members are still learning the show, etc.

* show dynamics change. If there are only six WC corps at a show, there is a LOT more room for numbers. Especially when we have a diverse lineup such as: Pioneer & Cascades (two corps near the competitive bottom); Academy & Colts (two corps closer to the middle); Blue Knights & Scouts (two Top-10ish Finalists). With that sort of show dynamic, it's easier to say the Colts have a wide spread in front of Cascades but are closer to Blue Knights. With 20+ groups, especially this time of year, that becomes very difficult. You have to fit 20 groups into a range of maybe 15 total points. In the Percussion caption, you can put maybe a 4-point spread from first to last at a smallish local show, whereas at a 20+ corps Regional you have maybe a 4.5 spread to fit EVERYONE. That totally changes things for numbers management, especially since most judges don't like to tie subs unless it is either blatantly obvious they are equal, or it is literally impossible to rank two units apart. This is the likely reason for the "West Coast Bias" thoughts: there are only a few WC corps at any given show out there, and when a judge goes from Pacific Crest to SCV back-to-back, it's easy to say SCV is definitely a few points better (in a caption), which bumps up the top numbers of the night. When you get to an all-inclusive Regional, there is a lot less wiggle room.

Also, for Regionals it's imperative that a judge "gets it right," since placements affect performance order at future Regionals. That makes ranking & rating even more important, whereas at smaller shows you not only have more wiggle room, but in the grand scheme of the season they are not as important as Regionals.

That might sound callous, or unfortunate, but it is what it is. As a judge, you try your damnedest to catch everything (good & bad), reward based on the sheets vs. personal preference (my personal creed), make good tapes that credit & critique explicitly (instead of vaguely), etc. Just like everything in life, you aim to be the best, but inevitably stuff happens. At big shows early in the year, it's not uncommon to have a few corps really underachieving, a few corps really overachieving, and then a whole lot of corps in the middle where it can often be very difficult to make the call (but that's why they get the big bucks).

Sorry for the long response, and I'm cutting it short now because I'm currently distracted by a dozen other things. I hope that makes some sense for you.
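The spread arithmetic described above can be sketched in a few lines of Python. This is a hypothetical back-of-the-envelope illustration, not any official DCI formula; the corps counts and point spreads are just the example figures from the post:

```python
# A rough sketch (not an official formula) of the "numbers management"
# arithmetic: fitting N corps into a fixed caption spread.

def average_gap(num_corps: int, total_spread: float) -> float:
    """Average score gap between adjacent corps if no two are tied."""
    return total_spread / (num_corps - 1)

def min_spread_needed(num_corps: int, min_gap: float) -> float:
    """Smallest spread that separates every corps by at least min_gap."""
    return min_gap * (num_corps - 1)

# A small local show: 6 corps across a 4-point percussion spread
print(round(average_gap(6, 4.0), 3))    # 0.8 between adjacent corps

# A 20-corps regional squeezed into a 4.5-point spread
print(round(average_gap(20, 4.5), 3))   # 0.237 -- hence tenths matter

# If every placement had to differ by a full point,
# 20 corps would need a 19-point spread in a single caption
print(min_spread_needed(20, 1.0))       # 19.0
```

The last line is why judges can't simply "separate all placements by one point": with 20 corps, whole-point gaps would blow far past any reasonable caption range, so tenths (and slotting gaps) are the only way to rank everyone without ties.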


I'm going to try to say this as bluntly as possible, but first I want to reiterate that while I do judge fairly often at the local/regional level, I do not judge at the National level of any band or winter percussion circuit, let alone at the DCI level.

(respectful snip)

Probably one of the cleanest explanations I've heard. Thanks.

Still, I'm troubled by the notion of "fitting" x number of corps into a tight band of quality rankings. Simplistically, if a judge has a 20-point spread from high to low score, why not just separate all placements by one point instead of scoring in tenths or hundredths? I get the concept, and I understand the human impact better thanks to your post, but... maybe I'm being too logical.

After all, it's subjective art, not logic, right?


Probably one of the cleanest explanations I've heard. Thanks.

(respectful snip)

Wow, I agree! I'd give it an 8.358 in design, and a 9.265 in execution.

That way, Mr. Garfield's explanation score has room to grow by Finals. :cool:

Edited by wvu80

Are you kidding me? That was a 9.264 at best.

You're probably right, but I had to leave a little room until I saw what Mr. Perc2100 had to say.

