Over the weekend, the Network for Public Education released a brief by Julian Vasquez Heilig and others titled “Should Louisiana and the RSD receive accolades for being last and nearly last?” The release of this brief was presumably timed to coincide with the 10-year anniversary of Hurricane Katrina, a horrific disaster that killed many and destroyed much of New Orleans. This report comes on the heels of several other NOLA-related publications, such as:
- Doug Harris’s brief on the achievement gains post-Katrina, which uses longitudinal statewide student-level data and (to my knowledge) the most advanced quasi-experimental techniques one could use given existing data limitations. That report concludes “We are not aware of any other districts that have made such large improvements in such a short time.”
- CREDO’s urban charters report, which uses sophisticated matching methods that, while not uncontroversial in some quarters, are at least a very reasonable attempt to solve the inherent problems in charter/traditional public comparisons. (You can read a defense of their methods here. I wouldn’t say I’m a strong enough methodologist to really adjudicate these issues, but it’s worth noting that Dr. Vasquez Heilig has, in the past, promoted findings from CREDO as producing causal estimates when it was convenient for him to do so.) That report concludes that students in New Orleans charter schools outperform their peers in traditional public schools by about one-tenth of a standard deviation.
The new brief is notable because it finds, across multiple measures, that Louisiana/New Orleans/The RSD are woefully underperforming. Here’s their conclusion, for example:
> In summary, the NAEP scores have risen in reading and math, but Louisiana’s ranking relative to the nation has remained the same in math and dropped one spot in reading. The new NAEP research in this brief shows that Louisiana charter schools perform worse than any other state when compared to traditional schools. This finding is highly problematic for the conventional narrative of charter success in Louisiana and the RSD. Also, the RSD dropout, push out, and graduation rates are of concern— placing last and nearly last in the state. After ten years of education reform it is a disappointment that only 5% of RSD students score high enough on AP tests to get credit. The review of data also demonstrates that neither the Louisiana ACT nor RSD ACT scores are positive evidence of success.
>
> In conclusion, the national comparative data suggest that there is a dearth of evidence supporting a decade of test-score-driven, state-takeover, charter-conversion model as being implemented in New Orleans. The predominance of the data suggest that the top-down, privately controlled education reforms imposed on New Orleans have failed. The state and RSD place last and nearly last in national and federal data. These results do not deserve accolades.
Now, I am not an expert on New Orleans, nor do I have a particular horse in this race. I want what’s best for the kids of New Orleans, and I want research that helps us decide what’s working for New Orleans’ kids and what’s not. Unfortunately, I don’t think the NPE brief helps us in that regard. In fact, the brief provides no evidence whatsoever about the effects of New Orleans’ reforms (and certainly less than the ERA and CREDO studies, among others, provide). I’m not going to do a full-scale critique here, but I will point out a few ways in which the report is fatally flawed.
- Probably the most obvious issue is that the NAEP data the authors use are not suited to an analysis of NOLA’s performance. New Orleans is NOT one of the districts that participates in NAEP’s Trial Urban District Assessment, so the statewide results from Louisiana tell us nothing at all about New Orleans’ performance. The Louisiana charter sample is not necessarily from New Orleans, anyway (as the authors point out, 30% of LA’s charters are outside New Orleans). This alone would be fatal for an analysis seeking to understand the effectiveness of NOLA reforms. And it would be puzzling to conduct such a study, given the obvious data limitations and the evidence we already have from studies with access to superior data. But setting that aside …
- The design of the study is not appropriate for any investigation that seeks to have high internal validity (that is, for us to trust the authors’ conclusions about cause and effect). If we were to take the authors’ analysis as implying cause and effect, that would be logically equivalent to simply ranking the states on their 2003-to-2013 gains and recommending whatever education policies were in place in the top-gaining states. As it happens, other folks (including our very own Secretary of Education) have done that already, arguing that Tennessee’s and DC’s policies were the ones we should be adopting. And they were wrong to do that, too. There’s even a term for this kind of highly questionable use of NAEP scores to make policy recommendations: misNAEPery. To be sure, there are appropriate uses of NAEP data to attempt to answer cause-and-effect questions (e.g., here and here). But these use much more sophisticated econometric techniques and go through dozens of falsification exercises, neither of which appears to have been done here.
- The authors seem to use data from only a couple of time points, for some reason ignoring all the other years of data that exist. Given all of the things that happened, both in Louisiana and in other states, between 2003 and 2013, it is inappropriate to simply take the difference in scores and attribute it to the impact of “reform.”
- The authors need to create strong comparison groups from observational data, but instead they resort to regression-based approaches. These would only create a fair charter/public comparison if the models adequately controlled for all the factors, observed and unobserved, that contribute to the charter enrollment decision and affect the outcome. The very limited set of statistical controls in their model (e.g., only racial/ethnic composition and poverty at the school level) is almost certainly not up to the task. Observational charter studies using limited controls simply do not produce results that are consistent with studies using more advanced methods.
- The authors seem to ignore the other literature on this topic, such as the two studies cited above. At a minimum, they could offer justification for why their methods should be preferred over ERA’s and CREDO’s methods (perhaps they did not offer such a justification because none exists).
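To make the misNAEPery point above concrete, here is a minimal sketch of what the brief's logic amounts to: difference two cross-sectional NAEP scores, rank the states, and (implicitly) credit or blame whatever policies the top and bottom states happened to have. All numbers below are invented for illustration; they are not real NAEP results.

```python
# Hypothetical state-level scale scores: state -> (2003 score, 2013 score).
# Invented numbers, NOT real NAEP data.
naep_scale_scores = {
    "State A": (262, 278),
    "State B": (270, 274),
    "State C": (255, 269),
}

# The naive analysis: subtract one cross-section from another and rank.
gains = {state: y2013 - y2003 for state, (y2003, y2013) in naep_scale_scores.items()}
ranking = sorted(gains, key=gains.get, reverse=True)

print(ranking)  # ['State A', 'State C', 'State B']

# Nothing in this computation isolates a policy effect: cohort composition,
# demographic change, exclusion rates, and every other policy adopted in the
# interval are all bundled into the raw difference.
```

The arithmetic is trivially correct, which is exactly the trap: a clean-looking ranking with no design behind it supports no causal claim at all.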
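The selection problem in the regression bullet can also be illustrated with a toy simulation (entirely invented data, not the brief's): when an unobserved factor, say family motivation, drives both charter enrollment and test scores, an analysis that "controls" only for an observed variable like poverty will recover a sizable "charter effect" even though the true effect is zero by construction.

```python
import random
import statistics

random.seed(0)
n = 20_000
rows = []
for _ in range(n):
    motivation = random.gauss(0, 1)        # unobserved by the researcher
    poverty = random.random() < 0.5        # the observed control
    charter = motivation + random.gauss(0, 1) > 0   # motivated families enroll
    # True charter effect on scores is ZERO by construction.
    score = 0.5 * motivation - (0.3 if poverty else 0.0) + random.gauss(0, 1)
    rows.append((charter, poverty, score))

# "Control" for poverty by stratifying on it, then average the within-stratum
# charter/traditional gaps -- the best a model with only this control can do.
gaps = []
for p in (True, False):
    charter_scores = [s for c, pov, s in rows if c and pov == p]
    tps_scores = [s for c, pov, s in rows if not c and pov == p]
    gaps.append(statistics.mean(charter_scores) - statistics.mean(tps_scores))
est = statistics.mean(gaps)

print(f"estimated 'charter effect': {est:.2f}")  # clearly positive, despite a true effect of 0
```

The bias never shrinks with sample size; it goes away only when the design (lotteries, matching on richer covariates, longitudinal student-level data) breaks the link between the unobserved factor and enrollment, which is precisely what the ERA and CREDO studies attempt and this brief does not.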
There are many other critiques, and I’m sure others will make them. The main conclusion is that this report provides literally zero information about the causal effect of the New Orleans reforms on student outcomes, and should therefore be ignored. There are lots of hard questions about New Orleans reforms, and we need good evidence on them. Even folks who are far from reform-friendly agree that this brief provides no such evidence.
EDIT: If you want to see Dr. Vasquez Heilig’s response, in which he essentially acknowledges his report has zero internal validity, check it out here.
 That this kind of work would be promoted by groups that claim to “[use] research to inform education policy discussions & promote democratic deliberation” is a shame, and a topic for another conversation.