It’s Time to Get Specific About Value-Added Measures
Here’s a paragraph you never see in a story about value-added measures:
Because test scores make up only 20% of an evaluation, proponents say it is unlikely good teachers will lose their jobs. They cite a study that found only a 0.1% chance that a teacher whose “true” ability is above the 50th percentile will be fired. The local union disputed those numbers, claiming that, according to its analysis of the plan, about 1 in 75 above-average teachers could lose their jobs.
Why doesn’t this kind of back-and-forth exist? So much energy is spent debating the value and weights of various metrics, but nobody ever attempts to quantify the consequences of those metrics. Why not tell people what you actually think will happen? Every econo-architect who builds value-added systems pays attention to models of test score variation. It’s not hard to make an estimate. Sure, there would be a ghastly amount of variance, and different interests would cook up their own numbers, but at this point anything is better than nothing. It’s insane that a debate about using test scores to evaluate teachers contains no quantitative discussion of how likely the system is to cause an undesirable dismissal.
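To see how rough an estimate like this could be made, here is a minimal Monte Carlo sketch. Every number in it is a stand-in assumption, not a description of any real evaluation system: true teacher ability and measurement noise are both standard normal, and the hypothetical policy fires the bottom 5% of teachers by observed value-added score.

```python
import random

random.seed(0)

N = 100_000           # simulated teachers (assumed)
NOISE_SD = 1.0        # measurement noise, assumed as large as the true spread
FIRE_FRACTION = 0.05  # hypothetical policy: dismiss the bottom 5% of scores

# True ability and the noisy value-added estimate of it.
true_ability = [random.gauss(0, 1) for _ in range(N)]
observed = [t + random.gauss(0, NOISE_SD) for t in true_ability]

# The dismissal cutoff is the observed score at the 5th percentile.
cutoff = sorted(observed)[int(N * FIRE_FRACTION)]

# How often does an above-median teacher fall below the cutoff?
above_median = [(t, o) for t, o in zip(true_ability, observed) if t > 0]
fired = sum(1 for _, o in above_median if o < cutoff)

print(f"P(fired | above-median true ability) ≈ {fired / len(above_median):.4f}")
```

Under these toy assumptions the rate comes out to a fraction of a percent, i.e. in the same ballpark as the hypothetical 200-to-1 and 1000-to-1 odds discussed below, though changing the noise level or the firing threshold moves it considerably. The point is not the specific number but that producing one takes a dozen lines of simulation.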
Making a public effort to quantify the consequences of value-added measures is important because many of the dangers of test-based evaluation are a self-fulfilling prophecy. A key argument against value-added measures is that job security concerns will lead to detrimental changes in teacher behavior (e.g., teaching to the test). But that only makes sense if it’s reasonably likely a good teacher will be fired. If teachers believed in the accuracy of the evaluation system, they would have little reason to change their behavior. Unfortunately, there’s no effort to inform teacher opinion on the issue.
I think Matt Di Carlo’s point about value-added measures being a symbol for firing teachers helps explain why the systems remain so opaque:
The intense debate surrounding value-added isn’t entirely – or perhaps even mostly – about value-added itself. Instead, for many people on both “sides” of this issue, it has become intertwined with – a kind of symbol of – firing teachers.
Supporters of these measures are extremely eager to use the estimates as a major criterion for dismissals, as many believe (unrealistically, in my view) that this will lead to very quick, drastic improvements in aggregate performance. Opponents, on the other hand, frequently assert (perhaps unfairly) that value-added represents an attempt to erect a scientific facade around the institutionalization of automatic dismissals that will end up being arbitrary and harmful. Both views (my descriptions of them are obviously generalizations) are less focused on the merits of the measures than on the connected but often severely conflated issue of how they’re going to be used.
If value-added is a symbol of all that is bad for teachers, then the details don’t matter, because reality will never turn it into something benign; it will always evolve to represent whatever teachers feel is not benign. Value-added proponents have no need for quantitative details either: they fear real numbers will become a lightning rod for criticism.
Leaving numbers out of the discussion hurts all sides. Imagine teachers learned that, for someone at the 50th percentile, the odds of getting unfairly fired because of value-added were between 200 to 1 and 1000 to 1. Maybe that doesn’t sound so bad. It might be a bigger long shot than stringing together a bunch of poor performances during observations. Value-added proponents ought to welcome this kind of economic thinking.
Unions should also want actual numbers out there. Value-added is happening, and if it’s happening, everybody involved should want it evaluated as objectively as possible. Because a priori judgments can become self-fulfilling prophecies, an objective evaluation needs those judgments to be accurate, and to make them accurate we need numbers that give us a better idea of what to expect.