Last week NYC Chancellor Joel Klein proposed publishing individual teachers' value-added data, L.A. Times-style.
Current teacher evaluation systems do not take student achievement into account. That's a given. Value-added models (VAM), which attempt to measure how much of an effect individual teachers have on students' test scores, are potentially very useful as part of a re-worked teacher evaluation system. These data could be useful for promotion, dismissal and other reward decisions. But there are some items that should be discussed more often up front:
First, to calculate a VAM score, a teacher has to teach a class to which a standardized test is attached. In Georgia, as in many states, that means that a VAM could only touch somewhere around 30 percent of teachers, even in theory. Other teachers are always going to have to be evaluated some other way.
Second, there is a lot of potential error in the calculation of VAM scores — this is not a false argument. But there is not so much error that it makes the scores meaningless; it's just important to define what VAM scores really show and what they don't.
Third, to say that VAM is a bad idea because teacher evaluations "shouldn't be based on a single test score" really is a false argument — no one in Georgia is seriously proposing that. Combining VAM with principal observations and other forms of peer review would greatly enhance the way we think about and measure teachers' results, and would represent a method that is at once fairer and more objective. (It should be noted that this is exactly the type of teacher evaluation system proposed in Georgia's Race to the Top plan.)
And finally, to say that classes differ from year to year or that teachers of gifted students have an advantage also misses the mark. VAM concepts vary, but the best use individual sets of students as their own controls; every teacher is measured by how well their particular set of students performs, relative to that set's particular starting point.
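The "students as their own controls" idea can be made concrete with a toy calculation. The sketch below is purely illustrative — a bare-bones gain-score computation, not the statistical model any state actually uses (real VAMs involve regression adjustments and error estimates) — but it shows why a high-scoring gifted class confers no automatic advantage: only growth from each student's own starting point counts.

```python
# Illustrative toy example only -- not any state's actual VAM formula.
# Each student serves as their own control: this year's score is compared
# to the student's own prior score, and the teacher's average gain is then
# compared to the average gain across the whole comparison pool.

def value_added(teacher_scores, pool_avg_gain):
    """teacher_scores: list of (prior_score, current_score) pairs for one
    teacher's students. pool_avg_gain: average gain across all students in
    the comparison pool. Returns the teacher's average gain minus that
    baseline -- positive means students grew more than typical."""
    gains = [current - prior for prior, current in teacher_scores]
    return sum(gains) / len(gains) - pool_avg_gain

# A teacher whose students started low but gained more than the pool
# average gets a positive score; a gifted class's high starting point
# contributes nothing by itself.
print(value_added([(40, 55), (50, 62)], 10.0))  # average gain 13.5 -> 3.5
```

The key design point is that raw score levels never enter the result directly — only the difference between each student's endpoints, benchmarked against comparable students' growth.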
Regarding New York's idea of publicly posting individual teachers' results: I really don't see the practical point. VAM and re-worked teacher evaluations could be used for promotion and retention decisions, and are a big improvement over the current system; why publish names too? That move would add a lot of risk to teachers' jobs, but not enough reward to balance it out. Raising the stakes like this for individuals would also necessitate ever greater test security. Unless one wants to somehow take the idea of school choice all the way down to the level of the individual teacher, the costs greatly outweigh whatever informational benefit one thinks publishing those results might gain us.