Question on scores and judging


I'm reminded of a certain year when a certain corps's front ensemble stood on top of their respective keyboards for the final hit of their show and crashed their hearts out with their cymbals! A certain percussion judge, appalled by the "disgraceful" way they were treating their instruments, put them in last place at quarterfinals. A different judge had semis and finals, and gave them a score in the top-five bracket in percussion for semis. Unfortunately, that different judge was told by his mentor - who just so happened to be the same certain percussion judge from quarterfinals - that he could not give a drumline such a high score when it treated its instruments so badly. Come finals, the different percussion judge dropped them drastically from his semis placement.

All this talk about judging just reminded me of this "CERTAIN" story. Sorry if I'm not with the subject of the post completely, but just had to share the story. I couldn't resist.

Or, like, when CERTAIN fans or CERTAIN members from a CERTAIN corps like to talk sour grapes because they are CERTAINLY not as good as they'd like to be, and tend to grasp at CERTAIN excuses to try to overcompensate for their CERTAIN shortcomings?

When did this happen? You can keep the corps nameless if you like. I'm siding with the judges who put them down if they did in fact stand on their instruments. There's got to be some technique penalty for standing on your stuff. And the judge who put them up in the semis needed to be realigned before finals.

The whole story sounds fishy to me, though, since these days judges are not repeated from night to night - especially not from semis to finals.

There has GOT to be a joke in there somewhere... :shutup:

There's another aspect that we seldom discuss. That is how good programming highlights good performance (and perhaps hides flaws).

Cavies are among the best at this, designing their programs so that your eyes naturally focus where the program is peaking. Cadets 09 was an unfortunate example of the opposite; excellence in marching and playing was offset by an overly busy program which didn't give adequate opportunity to maximize appreciation.

This isn't just an audience issue. It's a judging issue too. The judges can't see everywhere. Like the audience, they focus sequentially. The program that maximizes appreciation deserves reward.

This is among the reasons why I'm okay with programming factoring into the judging. Not only is it fundamental to our enjoyment of the activity, it's highly preferable to the alternative. I don't want to see every corps marching the same show. Or worse, I don't want to reduce the shows to a series of common exercises in order to get competitive equity.

HH

I mostly agree. However, I think design too often carries too much weight.

design drives the score... even if you execute perfectly, you can't get a 10 in the execution subcaption unless your "content" is 9.7+, IMO.

Is it really that unreasonable to think that a corps would sometimes put out a 7.5ish show content-wise but perform it at a 9? If someone knows of any scores like this, please feel free to post them.

the general rule of thumb is a 0.5-1.0 spread when the top box (i.e., demand) is stronger, and no more than 0.5 when the performers are outperforming the book
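To make that rule of thumb concrete, here's a minimal Python sketch. The function name, the numbers, and the clamping behavior are my own illustration of the poster's claim, not anything taken from the actual DCI sheets:

```python
# Illustrative sketch of the content-vs-achievement spread rule of thumb
# described above. All values and behavior are hypothetical assumptions.

def clamp_achievement(content: float, raw_achievement: float) -> float:
    """Limit the achievement (execution) score relative to content (demand).

    Per the rule of thumb in the thread: when the performers are
    outplaying the book, achievement sits no more than 0.5 above content.
    """
    if raw_achievement > content:
        return min(raw_achievement, content + 0.5)
    return raw_achievement

# A corps with 7.5 content performing at a 9 would, under this reading,
# be held to about an 8.0 in achievement on the sheet.
print(clamp_achievement(7.5, 9.0))  # 8.0
print(clamp_achievement(9.7, 9.5))  # 9.5 (demand stronger, no clamp needed)
```

This is why, as the earlier post put it, design drives the score: the content number effectively caps how high the execution number can climb.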

people act like the judges are some sort of separate entity from dci.

they're not.

dci = the drum corps themselves. therefore, corps hire the judges and explicitly tell them how to judge with rubrics they, themselves, articulate and agree upon.

people act like the judges are some sort of separate entity from dci.

they're not.

dci = the drum corps themselves. therefore, corps hire the judges and explicitly tell them how to judge with rubrics they, themselves, articulate and agree upon.

This is true to an extent, but I think it distorts reality a bit: kind of like the (false) rationale that because the people elect their government officials, government officials therefore make laws based on what the people want. In theory, both statements are true; in practice, not so much.

The judging rubrics, with regard to design and performance standards, are CONSTANTLY changing: sometimes even from one week to the next. As staff members argue their points on quality of design and/or performance, judges alter their perception of what is and isn't effective and of what level of execution clarity belongs at the upper level. While judging rubrics and sheet criteria/verbiage are voted on by the DCI BoD, and while there is a broad standard when it comes to judging effectiveness and clarity, how each judge interprets that criteria and applies it to what is being performed on the field can often differ radically from a staff's or director's interpretation. Implying that judges simply do what they're told by corps directors might be a little disingenuous (or at least easily misinterpreted by some).

This is true to an extent, but I think it distorts reality a bit: kind of like the (false) rationale that because the people elect their government officials, government officials therefore make laws based on what the people want. In theory, both statements are true; in practice, not so much.

The judging rubrics, with regard to design and performance standards, are CONSTANTLY changing: sometimes even from one week to the next. As staff members argue their points on quality of design and/or performance, judges alter their perception of what is and isn't effective and of what level of execution clarity belongs at the upper level. While judging rubrics and sheet criteria/verbiage are voted on by the DCI BoD, and while there is a broad standard when it comes to judging effectiveness and clarity, how each judge interprets that criteria and applies it to what is being performed on the field can often differ radically from a staff's or director's interpretation. Implying that judges simply do what they're told by corps directors might be a little disingenuous (or at least easily misinterpreted by some).

the member corps, themselves, determine the judging criteria and hire the judges.

So the general consensus here is that design drives the score, but how much of that actually figures into the overall score? If Corps A had a show that was an 8 in difficulty (out of 10) and Corps B had a show that was a 10 in difficulty, and Corps A performed slightly better than Corps B, but Corps B performed extremely well for that difficult a program, would the judges reward Corps B more, giving them the benefit of the doubt and a higher score?

Or would Corps A come out on top because they performed better?

Edited by DCISuperfan
You rarely if ever see a corps score perfect in any caption regardless of how well they perform because judges don't want to max out their sheet. You never know what the next corps coming on is going to do.

Just a small question on how exactly corps are judged - let's say, hypothetically, that this year's top 12 corps played their programs completely and utterly perfectly. Every rifle was caught, every move was in time, every page of drill was perfect, and every note was played completely in tune, with the most balanced sound and perfect intonation... etc., etc.

Well, that would have to be hypothetical....because there is no "perfect". As amazing as performances get, there are always differences in performance quality that can be assessed. Ask any judge.

Once upon a time, performance judging was based on the "tick" system, where judges would make a "tick" mark on their sheet for each error they observed, and deduct one tenth of a point from their caption's maximum for each such error. The system hinged upon what tolerance level the judge applied to decide which imperfections were minor enough to be overlooked, and which were major enough to be ticked. If a corps only received a few "ticks", that did not mean they were truly that close to perfect.
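The tick system described above amounts to simple arithmetic, which can be sketched in a few lines of Python. The function name, the error severities, and the tolerance values here are hypothetical illustrations; only the "one tenth of a point per tick" mechanic comes from the post:

```python
# Illustrative sketch of the old "tick" system: each observed error
# above the judge's tolerance deducts one tenth of a point from the
# caption maximum. Severities and tolerances are made-up values.

def tick_score(caption_max: float, error_severities: list[float],
               tolerance: float) -> float:
    """Deduct 0.1 per error whose severity exceeds the judge's tolerance."""
    ticks = sum(1 for severity in error_severities if severity > tolerance)
    return max(0.0, caption_max - 0.1 * ticks)

# The same performance under two judges with different tolerances:
errors = [0.2, 0.5, 0.9, 0.3]           # hypothetical error severities
print(tick_score(10.0, errors, 0.4))    # lenient judge: 2 ticks -> 9.8
print(tick_score(10.0, errors, 0.1))    # strict judge:  4 ticks -> 9.6
```

This is exactly why a low tick count didn't mean near-perfection: the score depended as much on the judge's tolerance threshold as on the performance itself.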
