As I’ve written about already, one of my main research interests these days is the quality and alignment of textbooks to standards. My recent work on this issue is among the first peer-reviewed studies (if not the first) to employ a widely used alignment technique to rate the alignment of textbooks with standards. While I think the approach I use is great (or else I wouldn’t do it), it’s certainly not perfect. There are many ways to determine alignment; all of them are flawed.
Of course, there are others in this space as well. The two biggest players, by far, are Bill Schmidt and EdReports. Both are well funded and have released ratings of textbook alignment. EdReports’ ratings have recently come under fire from many directions, including both publishers and, now, the National Council of Teachers of Mathematics. NCTM released a pretty scathing open letter, which was covered by Liana Heitin over at EdWeek, accusing EdReports of errors and methodological flaws.
I have three general comments about this response by NCTM.
The first is that there is no one right way to do an alignment analysis. While the EdReports “gateway” approach might not have been the method I’d have chosen, it seems to me a perfectly reasonable way to constrain the (very arduous) task of reading and rating a huge pile of textbooks. Perhaps they’d have gotten somewhat different results with a different method; who knows? But their results are generally in line with mine and with Bill’s, so I highly doubt that their overall finding of mediocre alignment is driven by the method.
The second is that we always need to consider the other options when we’re evaluating criticisms like this. What kind of alignment information is out there currently? Basically you’ve got my piddly study of 7 books, Bill’s larger database, and EdReports. Otherwise you have to either trust what the publisher says or come up with your own ratings. In that context, it’s not clear to me that EdReports is any worse than anything else that’s available. And EdReports is almost certainly better than districts doing their own home-cooked analyses. The more information the better, I say.
The third point, and by far the most important, is that this kind of criticism is really not helpful at a time when schools and districts are desperate for quality information about curriculum materials. Schools and districts have been making decisions about these materials for years with virtually no information. Now we finally have some information (imperfect though it may be) and we’re nit-picking the methodological details? This completely misses the forest for the trees. If NCTM wants to be a leader here, it should be out in front on this issue, offering its own evaluations to schools and districts. Otherwise it’s left to folks like EdReports or me to do what we can to fill this yawning gap by providing information that was needed years ago. Monday morning alignment critiques aren’t helpful. Actually getting in the game and giving educators information would be a useful contribution.
For the record, I participated in the webinar where EdReports’ results were released, but I have not been paid by them and don’t currently do any work with them.
There’s probably other stuff out there I don’t know about.