
Gary Rubinstein wrote an excellent series of posts debunking the value-added metrics: http://garyrubinstein.teachforus.org/2012/02/26/analyzing-re...



I don't think that's a debunking. yummyfajitas covered the plotting tool, but that's only the start.

Clearly, the metric is far from perfect -- you can tell that without his analysis. Year-over-year variation is high, which suggests a lot of noise in the measure. That said, year-to-year scores do correlate, so the measure is at least somewhat reliable in the test-retest sense: measure the same thing twice and you get similar answers.
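
To make that concrete, here's a toy simulation (my own sketch, not from Rubinstein's posts; the numbers are made up): even when the noise swamps the underlying signal, two noisy measurements of the same stable quantity still correlate, because both contain that stable component.

  import numpy as np

  rng = np.random.default_rng(0)
  n = 1000                                   # hypothetical number of teachers
  true_effect = rng.normal(0, 1, n)          # stable underlying quality
  year1 = true_effect + rng.normal(0, 2, n)  # noisy measurement, year 1
  year2 = true_effect + rng.normal(0, 2, n)  # noisy measurement, year 2

  # Both years share the same stable component, so they correlate
  # even though the noise variance is 4x the signal variance.
  print(np.corrcoef(year1, year2)[0, 1])     # ~0.2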

All he's shown is that these scores are "noisy". There's no reason not to use a noisy metric; you just need to know that's what you've got and make responsible decisions.

For example, don't fire teachers who rank poorly, but do review their performance. In that case, we'd find that the "Worst Teacher" is underperforming on this metric because it wasn't designed to test great students, and we can move on. Another example of the same phenomenon in education is looking for teachers who help students cheat on exams: when suspicious correlations turn up, administrators investigate.
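
As a sketch of what "responsible decisions" looks like (again a toy example of mine, with assumed noise levels): flag the worst-scoring teacher from one noisy year, then check where that teacher actually sits on the true scale. Usually it's not at the bottom, which is exactly why the right response is a review, not a firing.

  import numpy as np

  rng = np.random.default_rng(0)
  n = 1000                                      # hypothetical teacher pool
  true_effect = rng.normal(0, 1, n)             # stable underlying quality
  observed = true_effect + rng.normal(0, 2, n)  # one noisy year of scores

  flagged = np.argmin(observed)                 # whom the metric calls worst
  # Where does that teacher actually sit on true effectiveness?
  pct = (true_effect < true_effect[flagged]).mean() * 100
  print(f"flagged teacher's true percentile: {pct:.0f}")  # often well above 0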

One last thing: what he doesn't investigate -- and this is much more important -- is the (lack of) correlation between value-added ratings and other metrics (say, peer evaluations).
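
That check is cheap once you have both sets of ratings side by side. A sketch (the peer_eval numbers here are pure placeholders, not real data):

  import numpy as np

  rng = np.random.default_rng(1)
  n = 200
  # Placeholder data -- in practice, load the district's value-added
  # scores and peer-evaluation ratings for the same teachers.
  value_added = rng.normal(0, 1, n)
  peer_eval = rng.normal(0, 1, n)

  # If value-added captures real teaching quality, the two should
  # correlate; a near-zero correlation would be the damning finding.
  print(np.corrcoef(value_added, peer_eval)[0, 1])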



