Asking whether poverty is included in the Florida formula to evaluate teachers is posing the wrong question, according to Matthew DiCarlo at The Shanker Blog.
DiCarlo is responding to a recent story in which some educators questioned the absence of a poverty factor in the state's teacher evaluation formula. State officials argue that poverty is irrelevant because the formula measures student improvement over a period of years.
DiCarlo notes that the formula can account for poverty indirectly, using factors that serve as proxies for it. More important, he argues, is making sure the formula accounts for all factors outside a teacher's control.
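Growth models of this kind are, at bottom, regressions: a student's current score is modeled as a function of prior scores and other background measures, and the teacher's effect is whatever systematic difference remains. The toy sketch below illustrates that idea only; the teacher labels, the free-lunch proxy, and all coefficients are invented for the example and are not Florida's actual formula.

```python
# Toy value-added sketch: regress current scores on a prior score and a
# poverty proxy, so the teacher-effect estimate is net of those factors.
# All data here are simulated; nothing reflects the real Florida model.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
prior = rng.normal(500, 50, n)        # prior-year test score
poverty = rng.binomial(1, 0.4, n)     # proxy: 1 = free/reduced-lunch eligible
teacher = rng.integers(0, 2, n)       # two hypothetical teachers, 0 and 1
true_effect = np.array([0.0, 5.0])    # teacher 1 truly adds 5 points

# Simulated current score: prior achievement, poverty, teacher, noise.
current = (0.8 * prior - 10.0 * poverty
           + true_effect[teacher] + rng.normal(0, 10, n))

# Design matrix: intercept, prior score, poverty proxy, teacher-1 dummy.
X = np.column_stack([np.ones(n), prior, poverty, teacher == 1])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)

print(round(beta[3], 1))  # estimated teacher-1 effect, close to 5
```

Because prior score and the poverty proxy are in the model, the teacher coefficient recovers something near the true effect even though poverty depresses scores. The catch DiCarlo raises is everything this toy omits: unmeasured non-teacher factors, noisy single-year data, and the random error in the estimate itself.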
So, the answer to this more central question – whether a growth model accounts for non-teacher factors – is inherently a matter of degree. When using properly interpreted estimates from the best models with multiple years of data (not the case in many places using these estimates for decisions), it's fair to say that a large proportion of the non-teacher-based variation in performance can be accounted for.
There will, however, always be bias, sometimes substantial, affecting many individual teachers, as there would be with any performance measurement. Whether or not the bias is “tolerable” depends on one’s point of view, as well as how the estimates are used (the latter is especially important among cautious value-added supporters like myself). Furthermore, as I’ve argued many times, the bigger problem in many cases, one that can be partially addressed but is being largely ignored, is random error.
But that's a separate discussion. For now, the main point is that the controversy over "poverty" has assumed a position of unqualified importance in the debate over value-added. The issue is broader and more complicated than that. Reducing it to a poverty argument is likely to be unproductive in the short and long run. It oversimplifies the potential problem of systematic bias, and it also ends up ignoring the critical issues – implementation, random error, model specification, data quality, etc. – that can make all the difference.
Any opinions on how well this formula accounts for factors outside a teacher's control?