
Recaps are up on DCI.org


Recommended Posts

I found Cesario's discussion of how they do judge evaluations pretty interesting: http://www.marchingroundtable.com/2016/05/25/518/

Basically: self eval, peer eval, corps eval, and a Cesario/Phillips eval. He specifically talks about listening to the tapes of judges who give outlier numbers to see if they justify those numbers. Judges then get used more, less, or not at all based on the results of the evaluations.

He doesn't mention statistical analysis, and I'm not sure that it would really help. It's one thing to know that on Aug 2nd, Jane Judge gave a significantly bigger guard spread from Corps X to their closest competitor. It's another thing entirely to know whether Jane was correct to do so. Corps X can use the evaluation process if they think they are getting incorrectly evaluated by Jane.

Listening to tapes is one thing. And I agree that should be one component of a review.

Not mentioning a statistical analysis - and the thought that it might not even be done - seems ridiculous to me.

Anyone with an intermediate level of Excel skill and the time and energy can crunch these numbers like a pro.

Again, see my response above.

But, my gosh, for the integrity of the activity, DCI has to communicate to the fan-buying public that they have a sincere interest in ensuring the highest standards of adjudication. At the very least, I believe they are failing to communicate a clear message about their oversight of their judges.


Calling on Garfield - or some other numbers cruncher.

Here's what I'd like to see on the recaps.

A statistical analysis.

Caption highs and lows for each corps - season historical, by date

Caption rankings by corps - season historical, by date

Scoring, ranking by judge, plus/minus the average for each corps

Judge differential above/below a prescribed tolerance

What the analysis might show:

Wacky scoring outside of the norm

Bias

Inconsistency - or consistency in judging

Scoring patterns

I'm sure there's more that a numbers cruncher could pull out of the recaps - a rough sketch of the kind of plus/minus check I mean follows below.
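Here's a minimal sketch in Python (Excel would do the same job), assuming the recap scores have already been pulled into simple rows; the corps names, scores, and the 0.3 tolerance are made up purely for illustration:

```python
from collections import defaultdict

# Made-up recap rows: (date, corps, caption, judge, score).
recap = [
    ("2016-08-02", "Corps X", "Color Guard", "Jane Judge", 19.0),
    ("2016-08-02", "Corps Y", "Color Guard", "Jane Judge", 18.2),
    ("2016-08-03", "Corps X", "Color Guard", "Joe Judge", 18.3),
    ("2016-08-03", "Corps Y", "Color Guard", "Joe Judge", 18.3),
]

TOLERANCE = 0.3  # the "prescribed tolerance" - illustrative, not a DCI number

# Season average per (corps, caption) across all judges and dates.
by_corps_caption = defaultdict(list)
for _, corps, caption, _, score in recap:
    by_corps_caption[(corps, caption)].append(score)
season_avg = {key: sum(vals) / len(vals) for key, vals in by_corps_caption.items()}

# Each judge's plus/minus against that average, flagged when outside tolerance.
for date, corps, caption, judge, score in recap:
    diff = score - season_avg[(corps, caption)]
    flag = "FLAG" if abs(diff) > TOLERANCE else "ok"
    print(f"{date}  {judge:<12} {corps:<8} {caption}: {diff:+.2f}  {flag}")
```

A judge whose numbers for one corps sit consistently above or below that average is exactly the "differential above/below a prescribed tolerance" case I'm talking about.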

I would hope this is the thing that DCI would do to evaluate their judges.

As I've said before, I think only the best, most experienced judges should ever get near Lucas Oil Stadium. I think it is a travesty that as fans we even have to consider who the judging panel is on any given night as to how that might affect the scoring.

You'd be better off asking DCP (who may or may not enter all the recaps) or frontensemble.com (same).

They maintain a database with every subcaption score.


Exactly as stated above by DCI-86... and do we know whether the judge may have waited for the last corps to perform BEFORE writing in the 100 for BD?

I don't know how this process works, but does the judge judge that corps, put it in a packet, and that's that?


You'd be better off asking DCP (who may or may not enter all the recaps) or frontensemble.com (same).

They maintain a database with every subcaption score.

You're probably right. As I stated above, though, it's more important that DCI does this analysis. Perhaps they do, but if they do, they're not telling. They may think that's the right approach, but if the fan-buying public, and perhaps the corps themselves, begin to lose faith in the integrity of the adjudication, the corps may as well just perform in exhibition, or in front of mostly no one, or stagnate competitively.


As I've said before, I think only the best, most experienced judges should ever get near Lucas Oil Stadium. I think it is a travesty that as fans we even have to consider who the judging panel is on any given night as to how that might affect the scoring.

I think we have had that in the past, and then there were people on here literally calling judges names for having that much experience, making what were pretty much discriminatory remarks toward them because they may not be under 50 years old. Well, you can't have it both ways. I don't care how great someone is in their 20s, and of course there are exceptions to the rule in any case, BUT experience comes with time, mistakes, watching, constant learning, discussion with the right people, and openness to ideas from the judging community and from corps staffs with ideas other than your own. As a young staff person many, many years ago, I fought back on everything, as most youth do (not a bad thing), BUT eventually you learn, at least those with longevity in this activity do, that there is much to learn. This can and does mean back and forth. An older judge or staff person doesn't stay in the activity long if their eyes aren't totally open to others, which includes listening to a young person who might have something good to contribute.

Also, it doesn't mean that some of the good ole boys don't need to go, just as some needed to go way back when. This is, of course, with all due respect. AGE should NEVER be a factor, old or young. Contribution should ALWAYS be.



How do you propose this statistical analysis be done? What tests would be used? Is there enough data in a year to get reliable results that are not just by chance? How does DCI's spread-based scoring system affect this analysis?


Calling Nate Silver! Please make sense of this!


How do you propose this statistical analysis be done? What tests would be used? Is there enough data in a year to get reliable results that are not just by chance? How does DCI's spread-based scoring system affect this analysis?

I'm not a data geek, but I would suppose you could take all the numbers, do a trending analysis, determine the average trend, identify the outlying scores (say +/- .x), and go from there. You would expect to see some variation, but if you find a consistent plus or minus for a certain corps from a specific judge, then a little more digging may be required. Again, you'd be looking for consistency: a corps' performance is going to vary day to day, but overall the trending should be fairly consistent. I would think this should be part of DCI's evaluation process - take that, the tape eval, feedback from the corps, etc., build it into a scorecard, and do a full performance evaluation on each judge. I would hope DCI is already doing this or something similar.
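Something like this back-of-the-envelope pass is what I'm picturing - the days, judge names, scores, and the 0.3 tolerance are all made up just to show the mechanics:

```python
# Made-up guard scores for one corps over a season: (day of season, judge, score).
scores = [
    (1, "Jane Judge", 17.5), (4, "Joe Judge", 17.0), (8, "Jane Judge", 18.2),
    (12, "Joe Judge", 17.5), (16, "Jane Judge", 18.8), (20, "Joe Judge", 18.0),
]

TOLERANCE = 0.3  # the "+/- .x" from above - purely illustrative

# Simple least-squares trend line for the season: score ~ slope * day + intercept.
days = [d for d, _, _ in scores]
vals = [s for _, _, s in scores]
mean_d = sum(days) / len(days)
mean_s = sum(vals) / len(vals)
slope = (sum((d - mean_d) * (s - mean_s) for d, s in zip(days, vals))
         / sum((d - mean_d) ** 2 for d in days))
intercept = mean_s - slope * mean_d

# Residual of each score against the trend; a judge whose residuals sit
# consistently on one side of it is the "more digging required" case.
residuals = {}
for day, judge, score in scores:
    residuals.setdefault(judge, []).append(score - (slope * day + intercept))

for judge, r in residuals.items():
    avg = sum(r) / len(r)
    verdict = "dig deeper" if abs(avg) > TOLERANCE else "within tolerance"
    print(f"{judge:<12} mean residual {avg:+.2f} -> {verdict}")
```

With only a handful of shows per judge per corps, the samples are small, which is a fair version of the "is there enough data in a year" question above.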

But lastly, I would communicate to all that selection of the panels for Indy is merit-based, just like in the NFL. It makes sense that it should be this way: only the best should have the privilege of judging the greatest shows of the year.



You may want to listen to Michael Cesario on the four-part judge evaluation system used by DCI.

http://www.marchingroundtable.com/2016/05/25/518/
