Common Core goes postmodern

A quick post today.

Mike Petrilli tweeted about the new IES standards center (of which I am a part) at Jay Greene and Rick Hess, asking whether they might be convinced of CCSS effectiveness by the results of such a study. To be clear, the study design is the same as that used in several previously published analyses of the impact of NCLB, which appear in top journals and are widely cited. We are simply using comparative interrupted time series (CITS) designs to estimate the causal impact of CCSS adoption and then exploring the possible mediating factor of state implementation.
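For readers who haven't seen one, here is a minimal sketch of what a CITS specification looks like. This is a generic model fit to made-up data, not our study's actual specification; the variable names (score, year, post, adopter) and the fake panel are mine.

```python
# A generic comparative interrupted time series (CITS) specification,
# fit to made-up data. Illustrative only -- not the study's actual model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Fake state-by-year panel: 'year' is centered at the adoption year,
# 'post' flags post-adoption years, 'adopter' flags CCSS-adopting states.
rows = []
for s in range(20):
    for year in range(-5, 6):
        rows.append({"state": f"S{s}", "year": year,
                     "post": int(year >= 0), "adopter": int(s < 12)})
df = pd.DataFrame(rows)
df["score"] = 250 + 0.5 * df["year"] + rng.normal(0, 2, size=len(df))

# post:adopter captures the post-adoption shift in *level* for adopting
# states; year:post:adopter captures the post-adoption shift in *trend*.
fit = smf.ols("score ~ year * post * adopter", data=df).fit(
    cov_type="cluster",
    cov_kwds={"groups": df["state"].astype("category").cat.codes},
)
print(fit.params[["post:adopter", "year:post:adopter"]])
```

The comparison group's pre/post trends supply the counterfactual, so the effect of adoption shows up as a post-adoption deviation in level and slope for the adopting states.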

Jay responded: “No. Low N and choosing and implementing CC are endogenous.”

Rick agreed: “Nah, the methodology on the link isn’t compelling- which fuels my skepticism. As Jay said: low n, endogeneity. Ugh.”

I’m fine with the attitudes expressed here, so long as they are taken to their logical conclusion, which is that we cannot ever know the impact of Common Core adoption or implementation (in which case why are we still talking about it?). I don’t see how, if the best-designed empirical research can’t be trusted, we can ever hope to know whether Common Core has had any impacts at all. So if Jay and Rick believe that, by all means.

I suspect, however, that Jay and Rick don’t believe that. For starters, they’ve routinely amplified work with methodological problems at least as serious as those in our yet-to-be-conducted study. In that case, however, the findings (standards don’t matter much) happened to agree with their priors.

Furthermore, both have written repeatedly about the negative impacts of Common Core. For instance: Common Core implementation causes opt-out. Common Core implementation is causing a retreat on standards and accountability. Common Core implementation is causing restricted options for parents. Common Core implementation is causing the crumbling of teacher evaluation reform. [1] How can we know any of these things are caused by Common Core if even the best-designed causal research can’t be trusted?

The answer is we can’t. So Rick and Jay (and others who have made up their minds that a policy doesn’t work before it has even been evaluated) should take a step back, let research run its course, and then decide if their snap judgments were right. Or, they should conclude that no research on this topic can produce credible causal estimates, in which case they should stop talking about it. I’ll end with a response from Matt Barnum, which I think says everything I just said, but in thousands fewer characters:

“So are people (finally) acknowledging that their position on CC is non-falsifiable?”

Apparently.


[1] Note: I believe at least some of these claims may be true. But that’s not hypocritical, because I’m not pretending to believe there is no truth with regard to the impact of Common Core.

Gathering textbook adoption data part 2 (or: even when it should be easy, it ain’t)

A few posts ago I wrote about the challenges in getting a study of school textbook adoptions off the ground. Suffice it to say there are many. This post continues that thread (half of it is just me complaining, and the other half is pointing out the absurdity of the whole thing, so if you feel like you’ve already got the idea, by all means skip this one).

California is one state that makes textbook adoption data public, bucking the national trend. This is a result of the Eliezer Williams, et al. v. State of California, et al. case, in which a group of San Francisco students sued the state, convincing the court that state agencies had failed to provide public school students with equal access to instructional materials, safe and decent school facilities, and qualified teachers. As a consequence of this court case, California schools are required to provide information on these issues in an annual School Accountability Report Card (SARC). The SARCs are publicly available and can be downloaded here.

This is great! Statewide, school-level textbook data right there for the taking.

Well, not so fast. For starters, only about 15% of schools use a standard reporting form; the rest turn in their SARCs as PDFs. And the state doesn’t keep a database on that 85% (obviously). So the only way to get the information off those SARCs is to pull it manually, copying from the PDFs [1]. Thus, over the course of the last year, with the support of an anonymous funder, we’ve been pulling these data together.
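For what it’s worth, when a SARC PDF has selectable text (as opposed to being a scan), a small script can at least dump the raw text so coders aren’t retyping everything from scratch. Here’s a hypothetical sketch, assuming the pdfplumber package and a local folder of downloaded SARCs; the folder names and workflow are made up, not necessarily what we actually do:

```python
# Hypothetical helper: dump raw text from a folder of SARC PDFs so it can be
# hand-coded more quickly. Assumes the pdfplumber package and a local
# 'sarc_pdfs/' folder of downloaded reports; both are illustrative only.
from pathlib import Path

import pdfplumber


def dump_sarc_text(pdf_dir: str = "sarc_pdfs", out_dir: str = "sarc_text") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for pdf_path in sorted(Path(pdf_dir).glob("*.pdf")):
        pages = []
        with pdfplumber.open(pdf_path) as pdf:
            for page in pdf.pages:
                pages.append(page.extract_text() or "")  # scanned pages yield None
        (out / f"{pdf_path.stem}.txt").write_text("\n".join(pages), encoding="utf-8")


if __name__ == "__main__":
    dump_sarc_text()
```

Scanned SARCs, of course, still have to be keyed by hand (or run through OCR first).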

As it turns out, even when you can pull the data from the SARCs, there are at least three major problems:

  • Large numbers of schools that simply don’t have a SARC in a given year. There must be some kind of exemption, because otherwise this would seem to violate the court ruling.
  • For schools that do have a SARC:
    • Textbook data that are missing completely, or that are missing key elements such as the grades in which the books are used and the years of adoption.
    • Listed textbook titles that are so vague (e.g., “Algebra 1”, when the state adopts multiple books with that title) or unclear (e.g., “McGraw Hill”, when that company publishes numerous textbooks) as to be somewhat useless (a toy illustration of this matching problem follows the list).
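To make that last point concrete, here is a toy illustration of the matching problem. The “state adoption list” below is invented and far shorter than a real one; the point is just that a reported title like “Algebra 1” matches several adopted books about equally well, so the SARC entry alone can’t tell you which book the school actually uses.

```python
# Toy illustration of the title-matching problem. The adoption list below is
# invented and far shorter than a real state list.
import difflib

state_adoption_list = [
    "Big Ideas Math Algebra 1",
    "Glencoe Algebra 1 (McGraw-Hill)",
    "Pearson Algebra 1 Common Core",
    "CPM Core Connections Algebra",
]

reported_title = "Algebra 1"  # exactly what many SARCs report

matches = difflib.get_close_matches(
    reported_title, state_adoption_list, n=5, cutoff=0.3
)
print(matches)  # multiple plausible matches -> the entry is ambiguous
```

In practice, ambiguous entries like these are exactly the ones we end up resolving by contacting districts directly.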

As a consequence, as in the NSF study, we’ll be reaching out to all districts with incomplete data via email or phone to fill in the gaps.

Of course the data will never be perfect, but they’re better than anything else available. Still, if the purpose of the court ruling is to provide some measure of public accountability through the clear reporting of this kind of information, it’s not clear to me that the SARCs are currently fulfilling that role. Perhaps the state doesn’t care to enforce the ruling, or doesn’t have the manpower to do so. That’s unfortunate, not because it makes this research more challenging, but because it deprives disadvantaged students of the remedy that the court has decided they are due.


[1] For reference, there are around 1,000 districts and 10,000 schools in California.

Monday Morning Alignment Critiques

As I’ve written about already, one of my main research interests these days is the quality of textbooks and their alignment to standards. My recent work on this issue is among the first peer-reviewed studies (if not the first) to employ a widely used alignment technique to rate the alignment of textbooks with standards. While I think the approach I use is great (or else I wouldn’t do it), it’s certainly not perfect. There are many ways to determine alignment; all of them are flawed.

Of course, there are others in this space as well. The two biggest players, by far, are Bill Schmidt and EdReports [1]. Both are well funded and have released ratings of textbook alignment. EdReports’ ratings have recently come under fire from many directions, including both publishers and, now, the National Council of Teachers of Mathematics (NCTM). NCTM released a pretty scathing open letter accusing EdReports of errors and methodological flaws; it was covered by Liana Heitin over at EdWeek.

I have three general comments about this response by NCTM.

The first is that there is no one right way to do an alignment analysis. While the EdReports “gateway” approach might not have been the method I’d have chosen, it seems to me a perfectly reasonable way to constrain the (very arduous) task of reading and rating a huge pile of textbooks. Perhaps they’d have gotten somewhat different results with a different method; who knows? But their results are generally in line with mine and Bill’s, so I highly doubt that their overall finding of mediocre alignment is driven by the method.

The second is that we need to always consider the other options when we’re evaluating criticisms like this. What kind of alignment information is out there currently? Basically you’ve got my piddly study of 7 books, Bill’s larger database, and EdReports [2]. Otherwise you have to either trust what the publisher says or come up with your own ratings. In that context, it’s not clear to me that EdReports is any worse than what else is available. And EdReports is almost certainly better than districts doing their own home-cooked analyses. The more information the better, I say.

The third point, and by far the most important, is that this kind of criticism is really not helpful at a time when schools and districts are desperate for quality information about curriculum materials. Schools and districts have been making decisions about these materials for years with virtually no information. Now we finally have some information (imperfect though it may be) and we’re nitpicking the methodological details? This completely misses the forest for the trees. If NCTM wants to be a leader here, they should be out in front on this issue, offering their own evaluations to schools and districts. Otherwise it’s left to folks like EdReports or me to do what we can to fill this yawning gap by providing information that was needed years ago. Monday morning alignment critiques aren’t helpful. Actually getting in the game and giving educators information: that’d be a useful contribution.


[1] For the record, I participated in the webinar where EdReports’ results were released, but I have not been paid by them and don’t currently do any work with them.

[2] There’s probably other stuff out there I don’t know about.

Gathering textbook adoption data (or: shouldn’t this be easier?)

Suppose you set out to study the impact of textbooks on teacher practice and student learning. The only way to begin such a study would be to pull together data on which textbooks were used in which schools.

You’d think this would be easy to do. After all, we live in a data-driven culture, and you can find just about any bit of information about your local school with a few seconds on Google (or on the state department of ed website).

Well, you’d be wrong.

As I mentioned in my last post, I have a couple of grants to study textbook adoptions. These grants are concentrated in the five largest US states by population (CA, TX, NY, FL, IL). Of these, only Florida keeps track of textbook adoptions at the district level. The other four states, comprising roughly 4,000 school districts, do not keep track at all [1].

This means that if you want to know which textbooks are being used in these 4,000 districts, you have to ask people. As far as I know, there’s no other way to do it. So that’s what we’ve done. We created a beautiful website where district personnel can go to report their textbooks. Then we gathered contact information for district personnel in all these districts and sent them a series of emails inviting them to participate and offering a chance at a $500 incentive to do so.

Suffice it to say the response rate was not what we hoped, even after several rounds of emails. So we’re moving on to round two. We’re sending state-specific open-records requests to every non-responsive school district in these states, pointing them to the website. And a couple of weeks after those requests arrive, a horde of USC undergraduate researchers will begin sending personalized emails and making phone calls to districts. Essentially, we hope to hammer all 3,000-ish non-California districts in our sample into submission.

I’m telling you all this not because it’s especially interesting (I probably should have picked a better topic for my early posts on the blog) but because it shows the absolutely absurd lengths one needs to go to in order to gather what should be a freely available, extremely basic piece of information about schools.

Of course my hope is that my projects are successful and that I can gather this information on almost all districts. But if I can’t, I at least hope I can convince some people that this is a piece of information we should be tracking. It costs essentially nothing to do, it does not endanger privacy in any way, and it’s very useful from both a research and an equity point of view.


[1] California actually does keep track to a certain extent; I’ll talk about the Golden State in a future post.

A textbook example of education research

One of my main research interests these days is the adoption and use of textbooks and other curriculum materials. Why would I possibly care about textbooks? Well, for starters, they’re incredibly cheap relative to other educational interventions, and they can have remarkably large causal effects (PDF) on student achievement. They also are just a skosh less politically treacherous than, say, radically altering teacher tenure policies.

This work began with a grant from an anonymous foundation to analyze the alignment of textbooks to the Common Core math standards. That investigation found overall weak alignment, with some common areas of misalignment across books (notably, they were excessively procedural relative to what’s in the standards) [1].

While that work was informative, it didn’t tell me much about who was using which textbooks, how, and to what effect. As a new set of standards rolls out, I’m guessing that curriculum materials may matter more than ever. So I set out to investigate these issues in a few different studies. The basic gist of this set of studies is to understand:

  • Which textbooks are being adopted in the core academic subjects in light of new standards?
  • What explains school and district textbook choices (qualitatively and quantitatively)?
  • How do teachers make use of textbooks in their teaching?
  • What are the impacts of textbook choices on student outcomes?

This work is funded by the National Science Foundation, the WT Grant Foundation (with co-PI Thad Domina), and by another anonymous foundation (with co-PI Cory Koedel).

In the coming days and months I’ll be talking quite a bit about this work and some of the lessons learned so far. The next post is going to highlight some of the things I’m learning as I’m trying to go through the (seemingly straightforward) task of simply gathering data on what textbooks schools and districts are using these days. Spoiler alert: it ain’t pretty.


[1] That work also identified some ways to make the process of analyzing textbooks (which turns out to be incredibly time- and labor-intensive) much simpler.