UPDATE: Matthew Ladner, director of policy research for the Foundation for Excellence in Education, responds. The takeaway: “There are very clear signs of aggregate level improvement in Florida, and also a large number of studies at the individual level showing positive results from individual policies.”
At the Shanker Blog, researcher Matthew Di Carlo reviews the effectiveness of the suite of education policies often called the “Florida model.”
These policies include assigning A through F grades to schools and school districts based in part on standardized test results, retaining low-performing third graders, expanding school choice, and evaluating teachers, among others.
Many of the policies were first implemented under former Gov. Jeb Bush, and he has exported them to other states.
Di Carlo emphasizes that the evidence so far allows for neither definitive nor broad conclusions:
That said, the available evidence on these policies, at least those for which some solid evidence exists, might be summarized as mixed but leaning toward modestly positive, with important (albeit common) caveats. A few of the reforms may have generated moderate but meaningful increases in test-based performance (with all the limitations that this implies) among the students and schools they affected. In a couple of other cases, there seems to have been little discernible impact on testing outcomes (and/or there is not yet sufficient basis to draw even highly tentative conclusions). It’s a good bet – or at least wishful thinking – that most of the evidence is still to come.
In the meantime, regardless of one’s opinion on whether the “Florida formula” is a success and/or should be exported to other states, the assertion that the reforms are responsible for the state’s increases in NAEP scores and FCAT proficiency rates during the late 1990s and 2000s not only violates basic principles of policy analysis, but it is also, at best, implausible. The reforms’ estimated effects, if any, tend to be quite small, and most of them are, by design, targeted at subgroups (e.g., the “lowest-performing” students and schools). Thus, even large impacts are no guarantee to show up at the aggregate statewide level (see the papers and reviews in the first footnote for more discussion).
It’s worth reading Di Carlo’s breakdown of the evidence in each section:
Research shows that F-rated schools respond to the pressure (and additional aid) and improve their performance, while schools earning A through C grades show no significant differences in improvement.
If charter schools affect student performance, either positively or negatively, the effect is small.
The threat of losing students to private schools or charter schools does seem to spur schools to improve programs, particularly for students with disabilities.
On other policies, such as retaining low-performing third graders, no conclusions can be drawn yet.