FOIA information

Textbooks seem to have suddenly become a hot topic again, with the recent release of a study of NYC schools’ textbook habits by Charles Sahm. Robert Pondiscio has a nice summary of Sahm’s important work, and his piece ends with a mention of my textbook FOIA work and some quotes from me about how silly it is that we don’t track these data routinely.

In that spirit, I thought it might be of interest if I gave a little update on where things stand with my research. Perhaps it will also explain my somewhat diminished presence here and on Twitter over the past two weeks.

I sent out 3,014 FOIA requests a bit before Memorial Day, and the responses started coming back on Tuesday, May 26. Since then, I’ve received:

  • Textbook adoption data reported on http://www.nsftextbookstudy.org for approximately 650 districts.
  • Email or snail mail responses for approximately 850 districts (some overlap with the above, but not much).

Of these 850 email/snail mail responses, I’d estimate about 10% say they don’t keep textbook data, about 50-60% provide the data, and about 30-40% say they need more time to collect it.

And of those who provide the data, it’s relatively clear that a substantial proportion of them–perhaps half–do not routinely keep such a list, but rather pulled something together for me (by law, by the way, they do not have to do this, so I am quite appreciative). Some of these pulled-together data include handwritten lists in cursive.

Five districts have so far charged me for the information (ranging from $1.19 to $27), which they are legally entitled to do if it took them time to pull the documents together or if they made copies.

One school originally demanded that I come pick the data up in Rochester, New York. However, after a somewhat testy email from me, the principal finally offered to email me a PDF for $2.25.

Two district leaders have sent threatening emails or left nasty messages indicating they would rat me out to my Dean (for what, I don’t know). I told my Dean and she said it was no big deal.

Perhaps 10 respondents have expressed great interest in the research or sent thoughtful notes.

It’s very clear that New York districts track these data less routinely than Illinois or Texas districts. However, my sense is that the response rate is quite a bit higher in New York than in the other two states, so perhaps it is just that the districts that don’t track the data in Illinois and Texas are ignoring me, whereas in New York they’re telling me they don’t track it.

So that’s where we are at this point. We have data of some kind from ~40% of districts in these four states. I haven’t really gotten beyond that to look at what they’re actually reporting yet. We’ll start with follow-up emails and letters to the nonrespondents in a few weeks.

All in all, I’ve been amazed at how unbelievably effective this has been as a research strategy, even if I feel somewhat bad for having had to deploy it. I’m very excited for the data gathered to this point, and I think it will be useful both to me and to other researchers moving forward.

Tests are the worst! Or the best! No, the worst!

A new Quinnipiac poll is out today. As always, I think it’s best to take these polls not as single data points in favor of one particular position, but rather as part of a broad sea of often contradictory, incoherent evidence about what/whether the public thinks about education.

There are some interesting nuggets in here, and again fodder for both “sides” of current education reform debates. The teachers’ unions and their supporters will love that the poll finds voters support the unions’ education policies over Governor Cuomo’s by a substantial margin (note that Cuomo’s overall favorability rating is net positive, so the weak support for his education policies is particularly striking; that said, I wonder how much people understand what his education policies even are). The reformsters will love that a majority thinks the number of charter schools in the state should be expanded. Nothing new here; support for charters in polls is almost always net positive.

What’s most interesting to me, though, is a series of questions about standardized testing. To me, these questions make painfully apparent the utter lack of coherence (or, to put it much more charitably, the nuance) in the public’s views of testing. First, we have the question “Do you think teacher pay should or should not be based on how well their students perform on standardized tests?” The results here are a resounding NO, 69/28. Similar results for whether standardized tests should be used for teacher tenure. [1]

Then we have the question “How much should these tests count in a teacher evaluation: 100%, 75%, 50%, 25%, or not at all?” Now, you would imagine these results would be mostly “not at all,” since in the two immediately preceding questions folks said the results shouldn’t be used for pay or tenure. Nope! In fact, 49% of people say these tests should count 50% or more in teacher evaluation, and another 27% say 25%. Just one-fifth of respondents–21%–say not at all. Hardly an anti-test bunch, these voters.

And finally, we have the question “Do you think standardized tests are or are not an accurate way to measure how well students are learning?” At this point I guess you’d have to think that voters would say yes, since in the immediately preceding question 77% said these tests should count for something in teacher evaluation (essentially everyone who chose 25% or more). But you’d be wrong again! 64% said that standardized tests were NOT an accurate way to measure how well students are learning.

So, tests are not an accurate way to measure student learning, but they should count for at least a quarter of a teacher’s evaluation, but they shouldn’t count at all in tenure or pay decisions. Got it. Suffice it to say this is yet another example of why it’s immensely problematic when people pick a single data point from one poll and use it in support of their existing position.

[1] Jacob Mishook, on Twitter, notes that these wordings could be construed to imply 100% reliance on standardized tests for these decisions, which is a fair point that might explain at least part of the very negative response.

What Twitter does for me

In the past 4-ish years, I’ve put out something like 26,000 tweets. That’s about 3.6 million characters typed (ok, not really, since a good chunk of those are retweets). It works out to nearly 20 tweets a day, every day. I’d say on a typical weekday, if you add it all up, I spend perhaps 30 minutes on Twitter. Sometimes it’s quite a bit more–I don’t want to know what award shows or election returns were like before Twitter–and sometimes it’s nothing at all.

Some folks, like Jay Greene, think Twitter is mostly “about the dumbest thing on the planet,” at least when it comes to having policy discussions. There is some truth to this, and in general I rarely get into substantive discussions on there, particularly with the army of trolls who pepper your every tweet with gotchas and asinine rhetorical questions. And I do worry sometimes that my presence on social media eclipses my status as a serious scholar–I had something of an existential crisis about this at the most recent AEFP conference, when I felt like I was constantly being introduced as “the guy on Twitter.”

But there is no doubt in my mind that my presence on social media has dramatically enhanced my career in multiple ways, and I would advise any doctoral student who plans to go into education policy that they should take it seriously in that regard. Here are a few ways in which Twitter has helped me:

  • It can be immensely powerful for actually getting research done. The most obvious example of this is my FOIA work, which came entirely from a Twitter conversation with Jason Becker, someone I met on Twitter but would likely never have otherwise known. If not for Jason’s idea, I’d probably be stuck at a 5% response rate, pulling my hair out. Now I’ve gone from 3% to about 40% in the span of six business days.
    • Another excellent example came just today, when a charter school in upstate New York said they’d be happy to give me my FOIA information if I’d only come to the school to pick it up. I tweeted out a plea for assistance, it was retweeted by several researcher friends, and, miraculously, someone who works two minutes from the school volunteered to go and pick it up for me. [1]
  • It keeps me much more informed about policy than I otherwise would be. I actually don’t know how folks who are not on social media keep track of all the goings-on in state and federal policy. I guess EdWeek and Politico digests? I would be lost.
  • It keeps me much more informed about research than I otherwise would be. While of course I subscribe to the usual panoply of journal TOCs, there’s always the new working paper or policy brief that doesn’t find its way into my inbox. Easily several times a month I’m downloading and reading publications I get through Twitter that I would not otherwise have found.
  • It gets my name in front of folks who would otherwise not see it. Mostly these are DC-based think-tank types, but it’s also journalists and a few researchers. It’s quite clear to me that I am dramatically better known than I ought to be given my relative inexperience. This leads to more citations (which actually do count for tenure) and more invitations to write and do important service (e.g., editing journal special issues). I probably would have gotten some of these opportunities even without Twitter, but certainly not all.
  • It’s just plain fun. All day, every day, I’m engaged in a back-and-forth with thousands of smart, experienced people. We exchange information and ideas. We joke about personal and professional issues. We engage in social movements and lighthearted memes. It’s a nice diversion from what is otherwise an often isolating, individualistic occupation.

Are there downsides to Twitter? Sure, maybe a few folks take me less seriously because of my engagement there. Or maybe I lose out on a tiny bit of productivity (though I’m actually certain that I’m MORE productive because of Twitter, not less, but whatever). It does require some effort to get anything out of it.

But it’s a slam dunk in my mind that my career would be considerably worse without Twitter, and I suspect that this would be true for virtually any young academic. So I’ll keep on tweeting and spreading the gospel of Twitter. And I’ll have fun doing it.


[1] I will do my best to pay this forward, universe.

What I’ll tell textbook publishers tomorrow

Today I’m heading out to Washington to speak at Content in Context, the annual conference of the Association of American Publishers PreK-12 Learning Group. This is actually my second time being invited to speak in front of this group of textbook publishers in the last year and a half or so. I’m on a panel about Common Core (natch), and I’ll probably be mostly talking about my alignment work and perhaps some of the newer data collection activities I’ve been describing ad nauseam on this blog and on Twitter.

I really appreciate that the publishers would want to have me at their meeting. After all, my research on alignment to standards has painted the publishers in a fairly negative light. And I was quoted in an article where another academic called publishers “snake oil salesmen” for their overstated alignment claims [1]. Certainly they could have tagged me as another anti-corporate academic with an agenda and cast me aside. Instead, they’ve continued to engage me in multiple ways, and I am quite confident that I am the better for it.

People often ask me if I think we’d be better off in a world without textbooks (often one second after they sneer at the very existence of textbooks or the notion that anyone would consider them worthy of policy research). The answer to that question is a pretty resounding “no.” While I suspect some teachers could do just fine creating curricula on their own or in teams, it strikes me as completely bananas to have 3 million teachers making their own materials. I doubt that current teacher education programs adequately prepare teachers for that kind of work, but even if they did, that level of decentralization wouldn’t make much sense to me. Furthermore, with that little standardization there would be virtually no hope of learning what’s working and what’s not. If anything, I wish there were fewer varieties of textbook out there, each with more research and evidence as to its quality (including a voluntary national option–which Engage might end up being) [2].

The traditional publishers are up against some pretty grave challenges right now, and I’m sure they know it. Aside from the research questioning their quality and alignment, the forces of the internet and freely available materials like Engage NY are making it harder and harder for strapped districts and schools to justify the (admittedly not so expensive in the scheme of things) traditional textbook.

So what I’ll tell the publishers on Wednesday is that I really want them to rise to this challenge–to put out materials that embody new standards and show schools and teachers why it’s worth it to use print or digital materials from a traditional publisher. They won’t be able to compete on cost (versus materials that cost $0). But they can certainly compete (and likely win) on quality. So that’s where they have to place their bets. I hope they succeed.


[1] I believe I’ve made this clear either here or elsewhere, but I do not believe publishers are snake oil salesmen. I haven’t met a textbook publisher I thought wasn’t trying hard to put out a good product, even if I sometimes think the end result falls short of that goal.

[2] In a future post I’ll talk about what we’re learning about the absurd variety of textbooks used in California schools–far more variety than I expected or I think is probably useful.

Chris Christie and Common Core

A little alliteration for your Thursday evening.

The big news today was NJ Governor Chris Christie announcing that he’s going to pull the state out of Common Core (but apparently keep the state in PARCC, for now?). This is a relatively big deal, as Common Core opponents have generally been extremely unsuccessful to date in getting states to repeal the standards despite tepid popular support and huge partisan gaps. NJ.com has a nice summary of Christie’s “evolution” on the issue. A few thoughts about this announcement.

1) There are basically three options here. First, Christie was lying when he said he supported the standards initially. Second, Christie is lying now and is pandering to the base because he’s desperate to do something to boost his presidential hopes. Or third, Christie was telling the truth at both time points and has actually evolved to oppose the Common Core due to implementation problems in the state. If you follow me on Twitter, you can guess which of these I think is most likely. It’s certainly possible he’s evolved on the issue, though I don’t see anything in New Jersey’s implementation that’s particularly bad in the national scheme of things (someone correct me if I’m wrong).

2) Chris Christie has no chance of being the Republican nominee for president. None. Zero. For that matter, neither does the other most prominent Common Core turncoat, Bobby Jindal. Christie’s way too liberal, and he’s pissed off the Republican base too many times. This move doesn’t change that, though I view it (as Paul Bruno does) as basically a Hail Mary.

3) I suspect the outcome of this will be a set of standards that looks an awful lot like the Common Core, as it has been in the other states that have gone through this song and dance. So it’s a lot of sound and fury that, in the end, probably signifies nothing.

4) I’m sure some educators in New Jersey are thrilled right now. But moves like this simply reinforce the (widely held and largely true) belief among educators that, when it comes to policy, “this too shall pass.” Certainly in the short term moves like this might result in new and different (perhaps better) standards. But in the long term, the more teachers invest in a policy only to get jerked around, the more they’ll ignore policy altogether. And that’s not good for anyone (at least anyone who believes policy might play a role in improving outcomes for kids).

A FOIA status update

So I’ve committed what must be the first cardinal sin of starting a blog. That is, I started this, posted a few things, and then vanished for a week. There are two reasons for my absence.

First, I was on vacation [1].

Second, my FOIA strategy with regard to the textbook study has been more successful than I could have imagined, and I’ve been completely buried just trying to keep up. Indeed, yesterday was the first time I can recall in my life when I had 100 unread emails.

When Jason Becker suggested this idea, I was more than a little skeptical. Nevertheless, I had 3,014 FOIA letters printed and mailed out on Friday. The responses started arriving over the weekend, and I’ve already obtained more than 8 times as many responses in two days as I had in the previous three months (both on the website and via email).

One interesting thing I’ve noticed is that there are several types of responses to the request:

1) Send over a list right away, no questions asked. These districts must either have had a list handy or been able to cobble something together pretty quickly. This is about 30% of respondents so far.

2) Ask for an extension on the 5- to 10-day window I offered in the letter. These districts presumably think they can get the information, and are willing to do so, but didn’t have a list handy. This is about 55% of respondents so far.

3) Deny the request outright, claiming no documents exist that describe the district’s textbooks. This is about 10% of respondents so far. About half of these denials are polite but curt, while the other half are nasty and threatening (e.g., we don’t have anything, but if you want to dispute this you can contact our lawyer, and also we’ll charge you $XX an hour to produce the results). Of course I’m not letting them off the hook that easily, since I simply don’t believe that districts have neither purchase orders nor school board minutes describing textbook selection.

4) Any of 1-3 combined with some griping about the process. This is about 5% of respondents so far. The most typical gripe is that districts are overwhelmed with these kinds of requests and that this is an abuse of the system. I’m actually quite sympathetic to this argument, and I would obviously have preferred simply obtaining the information from the state. But as we’ve discussed, this isn’t possible. A related gripe is that there is no capacity in the district central office to obtain this information. I find this either terrifying (a central office in a district with four schools doesn’t know what textbooks are used?) or hilarious (as in, a hilariously bogus excuse). One of these districts simply requested I email each of the individual school principals, which of course I’ll do.

In any case, it’s been a challenging but highly productive few days processing the responses to my textbook FOIAs. By my count we’re at somewhere around a 15% response rate now, so there’s quite a ways to go. It’ll be a long summer.


[1] Highly recommend Kiawah Island, South Carolina, as a destination.

Gathering textbook adoption data part 2 (or: even when it should be easy, it ain’t)

A few posts ago I wrote about the challenges in getting a study of school textbook adoptions off the ground. Suffice it to say there are many. This post continues in that vein (half of it is just me complaining, the other half is pointing out the absurdity of the whole thing, so if you feel like you’ve already got the idea, by all means skip this one).

California is one state that makes textbook adoption data public, bucking the national trend. This is a result of the Eliezer Williams, et al., vs. State of California, et al. case, where a group of San Francisco students sued the state, convincing the court that state agencies had failed to provide public school students with equal access to instructional materials, safe and decent school facilities, and qualified teachers. As a consequence of this court case, California schools are required to provide information on these issues in an annual School Accountability Report Card (SARC). The SARCs are publicly available and can be downloaded here.

This is great! Statewide, school-level textbook data right there for the taking.

Well, not so fast. For starters, only about 15% of schools use a standard form; the rest turn in their SARCs as PDFs. And the state doesn’t keep a database for that 85% (obviously). So the only way to get the information off the SARCs is to do it manually, by copying from the PDFs [1]. Thus, over the course of the last year, with the support of an anonymous funder, we’ve been pulling these data together.
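For the technically inclined: the extraction itself is mostly human labor, but simple scripts can at least triage which PDFs contain a textbook section before a coder opens them. Here is a rough sketch of that kind of pre-screening, not a description of our actual workflow; the directory name and keywords are hypothetical, and pdfminer.six is just one of several libraries that can pull text out of a PDF.

```python
# A rough pre-screening sketch (not our actual workflow): pull the raw text
# out of each downloaded SARC PDF and flag the files that appear to contain a
# textbook/instructional-materials section, so a human coder knows where to
# start. The directory name and keywords below are hypothetical.
from pathlib import Path

from pdfminer.high_level import extract_text  # pip install pdfminer.six

KEYWORDS = ("textbook", "instructional materials", "year of adoption")

def flag_sarcs(pdf_dir):
    """Return the names of PDFs whose text mentions a textbook-related keyword."""
    flagged = []
    for pdf in sorted(Path(pdf_dir).glob("*.pdf")):
        try:
            text = extract_text(str(pdf)).lower()
        except Exception:
            continue  # image-only/scanned SARCs will still need manual handling
        if any(kw in text for kw in KEYWORDS):
            flagged.append(pdf.name)
    return flagged

print(flag_sarcs("sarcs_2013_14"))  # hypothetical folder of downloaded SARCs
```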

As it turns out, even when you can pull the data from the SARCs, there are at least three major problems:

  • Large numbers of schools that simply don’t have a SARC in a given year. Apparently there is some kind of exemption, because otherwise this would seem to violate the court ruling.
  • For schools that do have a SARC:
    • Textbook data that are missing completely.
      • Or that are missing key elements, such as the grades in which they are used and the years of adoption.
    • Listed textbook titles that are so vague (e.g., “Algebra 1”, when the state adopts multiple books with that title) or unclear (e.g., “McGraw Hill”, when that company publishes numerous textbooks) as to be nearly useless; matching these to actual titles takes real work (a toy sketch of the matching problem follows this list).
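To give a flavor of that last problem, here is a toy sketch of a first-pass fuzzy match of a reported title against a made-up state adoption list, using difflib from the Python standard library. The titles are invented, any real match still gets checked by hand, and no amount of string matching rescues a truly ambiguous entry like “Algebra 1.”

```python
# Toy sketch: match a vague SARC textbook entry to the closest title on a
# hypothetical state adoption list. The titles here are made up, and any
# real match would be verified manually.
import difflib

STATE_ADOPTION_LIST = [
    "Glencoe Algebra 1 (California Edition)",
    "Holt Algebra 1",
    "Everyday Mathematics Grade 5",
]

def best_match(reported_title, candidates, cutoff=0.4):
    """Return the closest adopted title, or None if nothing clears the cutoff."""
    matches = difflib.get_close_matches(reported_title, candidates, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(best_match("Everyday Math gr. 5", STATE_ADOPTION_LIST))  # finds the Grade 5 title
print(best_match("Algebra 1", STATE_ADOPTION_LIST))  # returns one title, but the entry is genuinely ambiguous
```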

As a consequence, as in the NSF study, we’ll be reaching out to all districts with incomplete data via email or phone to fill in the gaps.

Of course the data will never be perfect, but they’re better than anything available anywhere else. Still, if the purpose of the court ruling is to provide some measure of public accountability through the clear reporting of this kind of information, it’s not clear to me that the SARCs are currently fulfilling that role. Perhaps the state doesn’t care to or doesn’t have the manpower to enforce the ruling. That’s unfortunate, not because it makes this research more challenging, but because it deprives disadvantaged students of the remedy that the court has decided they are due.


[1] For reference, there are around 1,000 districts and 10,000 schools in California.

Monday Morning Alignment Critiques

As I’ve written about already, one of my main research interests these days is the quality of textbooks and their alignment to standards. My recent work on this issue is among the first peer-reviewed studies (if not the first) to employ a widely used alignment technique to rate the alignment of textbooks with standards. While I think the approach I use is great (or else I wouldn’t use it), it’s certainly not perfect. There are many ways to determine alignment; all of them are flawed.
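For readers curious what “rating alignment” even means mechanically, here is a bare-bones sketch of one widely used index in this general family, Porter’s alignment index, which compares the proportions of content two documents devote to each topic-by-cognitive-demand cell. The cells and numbers below are invented for illustration; they are not the framework or data from my study.

```python
# Minimal sketch of Porter's alignment index: 1 minus half the sum of absolute
# differences between two content-proportion distributions. The content cells
# and proportions below are invented for illustration.

def alignment_index(doc_a, doc_b):
    """Porter's alignment index between two content distributions.

    Each argument maps a content cell (e.g., a topic x cognitive-demand pair)
    to the proportion of the document coded into that cell; the proportions in
    each document should sum to 1. Returns 1 for identical distributions and
    0 for completely non-overlapping ones.
    """
    cells = set(doc_a) | set(doc_b)
    total_abs_diff = sum(abs(doc_a.get(c, 0.0) - doc_b.get(c, 0.0)) for c in cells)
    return 1.0 - total_abs_diff / 2.0

textbook = {("fractions", "procedures"): 0.50, ("fractions", "concepts"): 0.30,
            ("geometry", "procedures"): 0.20}
standards = {("fractions", "procedures"): 0.30, ("fractions", "concepts"): 0.40,
             ("geometry", "concepts"): 0.30}

print(round(alignment_index(textbook, standards), 2))  # 0.6 for these made-up distributions
```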

Of course, there are others in this space as well. The two biggest players, by far, are Bill Schmidt and EdReports [1]. Both are well funded and have released ratings of textbook alignment. EdReports’ ratings have recently come under fire from many directions, including both publishers and, now, the National Council of Teachers of Mathematics. NCTM released a pretty scathing open letter, which was covered by Liana Heitin over at EdWeek, accusing EdReports of errors and methodological flaws.

I have three general comments about this response by NCTM.

The first is that there is no one right way to do an alignment analysis. While the EdReports “gateway” approach might not have been the method I’d have chosen, it seems to me a perfectly reasonable way to constrain the (very arduous) task of reading and rating a huge pile of textbooks. Perhaps they’d have gotten somewhat different results with a different method; who knows? But their results are generally in line with mine and Bill’s, so I highly doubt that their overall finding of mediocre alignment is driven by the method.

The second is that we always need to consider the other options when we’re evaluating criticisms like this. What kind of alignment information is out there currently? Basically you’ve got my piddly study of 7 books, Bill’s larger database, and EdReports [2]. Otherwise you have to either trust what the publisher says or come up with your own ratings. In that context, it’s not clear to me that EdReports is any worse than anything else that’s available. And EdReports is almost certainly better than districts doing their own home-cooked analyses. The more information the better, I say.

The third point, and by far the most important, is that this kind of criticism is really not helpful at a time when schools and districts are desperate for quality information about curriculum materials. Schools and districts have been making decisions about these materials for years with virtually no information. Now we finally have some information (imperfect though it may be) and we’re nit-picking the methodological details? This completely misses the forest for the trees. If NCTM wants to be a leader here, it should be out in front on this issue offering its own evaluations to schools and districts. Otherwise it’s left to folks like EdReports or me to do what we can to fill this yawning gap by providing information that was needed years ago. Monday morning alignment critiques aren’t helpful. Actually getting in the game and giving educators information–that’d be a useful contribution.


[1] For the record, I participated in the webinar where EdReports’ results were released, but I have not been paid by them and don’t currently do any work with them.

[2] There’s probably other stuff out there I don’t know about.

The Impact of Common Core

It’s pretty much always a good idea to read Matt Di Carlo over at the Shankerblog. His posts are always thoughtful and middle-of-the-road, a refreshing antidote to the usual advocacy blather. His recent post about the purpose and potential impact of the Common Core is no exception.

Here’s where I agree with Matt:

  • That standards alone are probably unlikely to have large impacts on student achievement.
  • That advocates of the standards do a disservice when they project such claims.
  • That making definitive statements about the impact of Common Core on student outcomes will be hard (and, I would say, causal research is almost certainly not worth doing at this point in the implementation process).

Here’s where I don’t agree with Matt. I don’t agree that standards are not meant to boost achievement. I believe that they most certainly are meant to boost achievement. Standards are intended to improve the likelihood that students will have access to a quality curriculum and, through that, learn more and better stuff. It’s a pretty straightforward theory of action, actually. Something like:

Standards (+ other policies) → Improved, aligned instruction → Student achievement

And I think we have pretty decent evidence on this theory of action. For instance, my work and the work of others make it reasonably clear that standards can affect what and how teachers teach (albeit imperfectly). There’s a great deal of research on the very commonsense notion that what and how teachers teach affects what students learn (my study from last year notwithstanding). We don’t have studies that I’m aware of that draw the causal arrow directly from standards to achievement, but given the evidence on the indirect paths, I believe this may well be due to the weaknesses of the data and designs more than to the lack of an effect.

That said, I fully echo Matt’s concerns about overstating the case for quality standards, and I hope advocates take this warning to heart. What we need is not over-hyped claims and shoddy analyses designed to show positive impacts [1]. What we need at this point is thoughtful studies of implementation and cautious, tentative investigations of early effects. These are just the kinds of studies we are seeking for the “special issue” of AERA Open that I’m curating. My hope is that this issue will provide some of the first quality evidence about implementation and effects, in order to inform course corrections and begin building the evidence base about this reform.


[1] Edited to add: We also don’t need garbage studies by Common Core opponents using equally shoddy methods to conclude the standards aren’t working.

Research you should read – on the impact of NCLB

This is the first in what will be a mainstay of this blog–a discussion of a recent publication (peer-reviewed or not) that I think more folks should be reading and citing. Today’s article is both technically impressive and substantively important. It has the extremely un-thrilling name “Adding Design Elements to Improve Time Series Designs: No Child Left Behind as an Example of Causal Pattern-Matching,” and it appears in the most recent issue of the Journal of Research on Educational Effectiveness (the journal of the excellent SREE organization) [1].

The methodological purpose of this article is to add “design elements” to the comparative interrupted time series (CITS) design, a common quasi-experimental design used to evaluate the causal impact of all manner of district- or state-level policies. The substantive purpose is to identify the causal impact of NCLB on student achievement using NAEP data. While the latter has already been done (see, for instance, Dee and Jacob), this article strengthens Dee and Jacob’s findings through its design-elements analysis.

In essence, what design elements bring to the CITS design for evaluating NCLB is a greater degree of confidence in the causal conclusions. Wong and colleagues, in particular, demonstrate NCLB’s impacts in multiple ways (a bare-bones sketch of the underlying model follows the list):

  • By comparing public and Catholic schools.
  • By comparing public and non-Catholic private schools.
  • By comparing states with high proficiency bars and low ones.
  • By using tests in 4th and 8th grade math and 4th grade reading.
  • By using Main NAEP and long-term trend NAEP.
  • By comparing changes in mean scores and time-trends.
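For readers who want a feel for the machinery, the core of a CITS analysis is a regression with an indicator for the treated group (here, public schools), a time trend, a post-NCLB indicator, and their interactions; the design elements above amount to repeating variations of that comparison across comparison groups, subjects, and NAEP series and checking whether the pattern of results matches the causal story. Below is a deliberately bare-bones sketch with fabricated data; it is not the authors’ actual model or data.

```python
# Bare-bones comparative interrupted time series (CITS) sketch using fabricated
# scores for public vs. Catholic schools, 1996-2011. The quantities of interest
# are the post-2002 changes in level and trend for public schools relative to
# Catholic schools. Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for group, treated in [("public", 1), ("catholic", 0)]:
    for year in range(1996, 2012):
        post = int(year >= 2002)  # NCLB signed into law in January 2002
        # Fabricated data: common upward trend plus a post-2002 bump for public schools.
        score = 230 + 0.5 * (year - 1996) + 3.0 * post * treated + rng.normal(0, 1)
        rows.append({"group": group, "treated": treated, "year": year,
                     "post": post, "score": score})
df = pd.DataFrame(rows)

# Center time at the policy change and build the interaction terms explicitly.
df["time"] = df["year"] - 2002
df["post_time"] = df["post"] * df["time"]
df["treated_time"] = df["treated"] * df["time"]
df["treated_post"] = df["treated"] * df["post"]
df["treated_post_time"] = df["treated"] * df["post"] * df["time"]

model = smf.ols(
    "score ~ treated + time + post + post_time + treated_time + treated_post + treated_post_time",
    data=df,
).fit()
print(model.params["treated_post"])       # post-NCLB level shift for public schools
print(model.params["treated_post_time"])  # post-NCLB trend change for public schools
```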

The substantive findings are as follows:

1. We now have national estimates of the effects of NCLB by 2011.

2. We now know that NCLB affected eighth-grade math, something not statistically confirmed in either Wong, Cook, Barnett, and Jung (2008) or Dee and Jacob (2011) where positive findings were limited to fourth-grade math.

3. We now have consistent but statistically weak evidence of a possible, but distinctly smaller, fourth-grade reading effect.

4. Although it is not clear why NCLB affected achievement, some possibilities are now indicated.

These possibilities include a) consequential accountability, b) higher standards, and c) the combination of the two.

So why do I like this article so much? Well, of course, one reason is that it supports what I believe to be the truth about consequential standards-based accountability–that it has real, meaningfully large impacts on student outcomes [2][3]. But I also think this article is terrific because of its incredibly thoughtful design and execution and its clever use of freely available data. Regardless of one’s views on NCLB, this is an article policy researchers should emulate. And that’s why you should read it.


[1] This article, like many articles I’ll review on this blog, is paywalled. If you want a PDF and don’t have access through your library, send me an email.

[2] See this post for a concise summary of my views on this issue.

[3] Edited to add: I figured it would be controversial to say that I liked an article because it agreed with my priors. Two points. First, I think virtually everyone prefers research that agrees with their priors, so I’m merely being honest; deal with it. Second, as Sherman Dorn points out via Twitter, this is conjunctive–I like it because it’s a very strong analysis AND it agrees with my priors. If it were a shitty analysis that agreed with my priors, I wouldn’t have blogged about it.