
How about giving the test on a computer and having it automatically change the difficulty level on the fly? (Obviously this may be a lot easier for math than for other subjects.) Test every grade level using the same program, just with different entry points based on the knowledge they are presumed to have learned at that point.
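Something as simple as a staircase rule gets you most of the way there: step the difficulty up after a correct answer, down after a wrong one. A rough sketch in Python (the item bank, the 1-10 difficulty scale, and the ask_student callback are all invented for illustration; a real system would use items calibrated with something like IRT):

    import random

    # Hypothetical item bank: difficulty level -> pool of questions.
    # A coarse 1-10 scale is enough to show the mechanics.
    ITEM_BANK = {level: [f"level-{level} question #{i}" for i in range(20)]
                 for level in range(1, 11)}

    def run_adaptive_test(entry_level, n_items, ask):
        """Staircase rule: move up a level after a correct answer,
        down after a wrong one. `ask(question)` returns True/False."""
        level, history = entry_level, []
        for _ in range(n_items):
            correct = ask(random.choice(ITEM_BANK[level]))
            history.append(level)
            level = min(10, level + 1) if correct else max(1, level - 1)
        # Crude ability estimate: average difficulty of the last few
        # items, after the staircase has had time to converge.
        return sum(history[-5:]) / len(history[-5:])

    # Same program for every grade; only the entry point changes, e.g.:
    #   score = run_adaptive_test(entry_level=4, n_items=25, ask=ask_student)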

Right now it sounds like they are rewarding teachers based on their students' percentile rank relative to other students at the same grade level, which is pretty much guaranteed to be useless noise if your students all start at the 98th percentile. It would be much more meaningful to be able to say, "Each of your students clearly knows more algebra than he did last year."
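Concretely, a growth measure compares each student to his own past self rather than to his peers, so it stays informative even when the whole class is near the ceiling. A toy example (scores invented, same scale both years):

    # Hypothetical year-over-year scores for two students.
    last_year = {"alice": 92.0, "bob": 95.0}
    this_year = {"alice": 97.0, "bob": 95.5}

    # Within-student growth: no ranking against classmates involved,
    # so a class full of 98th-percentile kids still gets a signal.
    growth = {name: this_year[name] - last_year[name] for name in last_year}
    print(growth)  # {'alice': 5.0, 'bob': 0.5}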

The other issue is how to motivate the children to actually want to do well on the test. Somehow rewarding a kid who does better than he did the previous time he took the test seems like a good first step...




All valid points, and there are testing suites in wide use that do exactly what you describe (this one is pretty standard: http://www.nwea.org/products-services/computer-based-adaptiv...), but they aren't the assessments measured by state departments. There are probably lots of reasons for this (the cynical one: they were outlobbied by Pearson, which has million-dollar contracts with many states to write their state tests), but nonetheless they exist.

The other problem is that this is a problem we should be throwing our best data scientists at, and in many cases we're lucky if those involved have an intro stats course under their belt. Evaluating student performance is no trivial task (ESPECIALLY when trying to create an accurate measure of reading level), but we're not putting the brainpower behind it to figure it out. And as much as I love it, I don't think Khan Academy will be solving those problems, but I'd love to be wrong.



