Judging system/score sheets


GE: scores are added together for a possible total of 40. When a double panel of four judges is used, the final score is half the total (max 80/2 = 40).

Isn't it an average of the first two added to the average of the second two? Or am I reading it wrong?

Example:

Genesis, at San Antonio...

GE 1 was an average of Cazpinski (12.7) and Sybilski (12.8) = 12.75

GE 2 was an average of Davis (13.6) and Jones (12.9) = 13.25

12.75 + 13.25 = 26.00 total general effect score
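For anyone who wants to sanity-check the arithmetic, here's a minimal sketch in plain Python (nothing official, just the Genesis numbers quoted above) showing that averaging each GE pair and then adding the two captions gives the same total as summing all four judges and halving.

```python
# Sketch of the GE math described above (assumed layout, not an official DCI tool):
# each GE caption score is the average of its two judges, and the two caption
# scores are added together for the total GE score.

def ge_total(ge1_scores, ge2_scores):
    """Average each GE caption's judge pair, then add the two caption scores."""
    ge1 = sum(ge1_scores) / len(ge1_scores)
    ge2 = sum(ge2_scores) / len(ge2_scores)
    return ge1 + ge2

# Genesis at San Antonio, using the scores quoted above
ge1_judges = [12.7, 12.8]   # Cazpinski, Sybilski
ge2_judges = [13.6, 12.9]   # Davis, Jones

print(round(ge_total(ge1_judges, ge2_judges), 2))      # 26.0
# Algebraically the same shortcut: add all four scores and halve the total.
print(round(sum(ge1_judges + ge2_judges) / 2, 2))      # 26.0
```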

I thought that Open Class was going to be judged on the same sheets and the same scale as WC? There is no way the top Open Class corps (not even BDB or SCVC) are scoring a 75 at this time. Rant over.

I'd chalk that up to show dynamics. If they had been in San Antonio, they'd likely have received lower scores.

So, I've read through this thread and have concluded that (using the collective anecdotal sample provided from those that have responded thus far):

1. Few of us seem to have any real knowledge of how this system actually works... starting with myself; or, at a minimum, we do a horrible job of articulating how it works.

2. The fact that it continually calls for "tweaking" every few years (with generally the same outcomes when all is said and done come August) speaks to its credibility... or lack thereof. (Certainly, many will defend this "tweaking phenomenon" as a derivative of a "progressive artistic genre" that "requires" adjustment as the activity continues to "evolve." Yeah, right.)

3. There should be considerable personal reflection when we find ourselves falling into deep discussion or debate on the concept of corps being "better or worse" than other corps on any given night, based upon the subjective nature of all of this, coupled with the fact that few of us can explain (let alone defend) exactly how we assess these corps, and how valid those assessments are.

4. In my opinion, all of this seems to have come about because we're little more than a niche activity of inbreds, and there's always weird things that happen when family members start mixing it up a bit too much, with little influence or benefit from the gene pool of a larger population.

In other words, it's tough to see ourselves objectively when our corps culture is rooted in recruiting and influencing uber-talented young people (only the best) with creative, artistic minds who come here already idolizing (probably to a fault) incredible designers, instructors, and organizational leaders, and who ultimately become instructors, designers, organizational leaders... and adjudicators themselves.

End result: we have this somewhat convoluted system that few can explain, let alone fully comprehend. It's easier to simply follow historical competitive patterns and trends to justify outcomes, and occasionally mix it up with a newcomer every now and then to break the monotony. Before you know it, you end up with the "haves and have-nots of drum corps" existing in a circuit that appears somewhat dysfunctional when assessed historically by any objective standard.

Everything in drum corps is about the "achievement of excellence"; it's what we do. Therefore, our assessment tools must likewise be excellent... right? After all, WE created it, WE internally evaluate it and each other, and WE defend the outcomes, year after year, because we're the best of the best, we're "marching music's major league." Thus, we assume (or accept) that the people creating and using these sheets are simply smarter than the rest of us, and we keep those individuals well compensated and idolized for what they do. After all, they sure do "sound smart" when they talk about it, don't they? (sarcasm off)

To the OP inquiry, I'm curious about your project and how this scoring sheet information will be used, if you're willing to share.

The Manchester scores are a travesty. Unless Spartans and 7th Regiment are the 14th-best corps and Legends are ahead of Mandarins, the scores are wrong. There's a lack of leadership in the judging administration; time for a change. Unfortunately, all those corps will now have to take backwards steps at the next show they do.

Isn't it an average of the first two added to the average of the second two? Or am I reading it wrong?

You're right. GE 1 scores are added then divided by 2; same for GE 2.

Few of us seem to have any real knowledge of how this system actually works...

Actually, several of us on here can explain it well. We're just ignored because it's easier to ##### when (insert corps here) doesn't get the score fans feel they deserve.

Isn't it an average of the first two added to the average of the second two? Or am I reading it wrong?

If you add them up...

12.7 + 12.8 + 13.6 + 12.9 = 52.0

Divide by 2 and you get... 26.0

Then again...there is this:

https://www.youtube.com/watch?v=xkbQDEXJy2k
