Do the content and quality of state tests matter?

Over at Ahead of the Heard, Chad Aldeman has written about the recent Mathematica study, which found that PARCC and MCAS were equally predictive of early college success. He essentially argues that if all tests are equally predictive, states should just choose the cheapest bargain-basement test, content and quality be damned. He offers a list of reasons, which you’re welcome to read.

As you’d guess, I disagree with this argument. I’ll offer a list of reasons of my own here.

  1. The most obvious point is that we have reasonable evidence that testing drives instructional responses to standards. Thus, if the tests used to measure and hold folks/schools accountable are lousy and contain poor-quality tasks, we’ll get poor-quality instruction as well. This is why many folks these days think that better tests should include tasks much closer to the kinds of things we want kids to actually be doing. In that case, “teaching to the test” becomes “good teaching.” That may be a pipe dream, but it’s something I commonly hear.
  2. A second fairly obvious point is that switching to a completely unaligned test would end any possible notion that the tests could provide feedback to teachers about what they should be doing differently/better. Certainly we can all argue that current test results are provided too late to be useful–though smart testing vendors ought to be working on this issue as hard as possible–but if the test is in no way related to what teachers are supposed to be teaching, it’s definitely useless to them as a formative measure.
  3. Chad’s analysis seems to prioritize predictive validity–how well results from the test predict other desired outcomes–over all the other types of validity evidence. It’s not clear to me why we should prefer predictive validity (especially when we already have evidence that GPAs do better at that than most standardized tests, though the SAT/ACT adds a little) over, say, content-related validity. Don’t we first and foremost want the test to be a good measure of what students were supposed to have learned in the grade? More generally, I think it makes more sense to have different tests for different purposes, rather than piling all the purposes into a single test.
  4. Certainly if the tests are going to have stakes attached to them, the courts require a certain level of content validity (or what they’ve called instructional validity). See Debra P. v. Turlington. If a kid’s going to be held accountable, they need to have had the opportunity to learn what was on the test. If the test is the SAT, that’s probably not going to happen.

Anyway, take a look at the Mathematica report (you should anyway!) and Chad’s post and let me know what you think.

Not playing around on play

This weekend’s hot opinion piece was the New York Times’ “Let the Kids Learn through Play,” by David Kohn. This piece set up the (fairly tired) play vs. academics dichotomy, citing a panoply of researchers and advocates who believe that kindergarten has suddenly become more academic (and Common Core is at least partly to blame).

There will undoubtedly be many takedowns of this piece. An early favorite is Sherman Dorn’s, which notes the ahistorical nature of Kohn’s argument. Another critique I noticed going around the Twittersphere centered on the fact that there’s far more variation in kindergarten instruction among classrooms than there is between time periods (almost undoubtedly true, though I don’t have a link handy).

Early childhood is not my area, so I can’t get too deep on this one, but I did have a few observations.

  1. I think the evidence is reasonably clear at this point that kindergarten is becoming more “academic.” Daphna Bassok has shown this using nationally representative data, and I have found it in my own analyses as well. This means both that kids are spending a greater proportion of their time on academic subjects, and also that instruction within subjects is becoming more concentrated on more “traditional” approaches (e.g., whole class, advanced content) and less concentrated on more student-directed approaches.
  2. Any time you read an op-ed and think “if they just flipped the valence on all these quotes, I bet they could find equally prominent researchers who’d support them,” you know you don’t have an especially strong argument. To put it mildly, my read of this literature is that it is far more contested than is described here. For instance, Mimi Engel and colleagues have several studies demonstrating that some of the advanced instructional content that comes under fire in Kohn’s piece and from the anti-academic-kindergarten crowd is the content most associated with greater student learning and longer-term success. Now, that doesn’t mean there might not be some tradeoffs (though I’d like to see those demonstrated before I’m willing to acknowledge them), but the literature is clearly not as one-sided as was portrayed here (and it may even be that the bulk of the quality evidence falls on the other side of this argument).
  3. As Sherman points out, this is also a ridiculous false dichotomy that is quite unhelpful. I don’t think anyone envisions kindergarten classes where students are mindless drones, drilling their basic addition facts all day long. Rather, many believe, and I think evidence suggests, that kindergarten students can handle academic content and that early development of academic skills can have long-lasting effects. Daphna perhaps put it best in an EdWeek commentary (you should read the whole thing if you haven’t already):

Our own research shows that children get more out of kindergarten when teachers expose them to new and challenging academic content. We are not arguing that most kindergartners need more exposure to academic content. At the same time, exposure to academic content should not be viewed as inherently at odds with young children’s healthy development.

I think this is exactly the right view, and one that was missed in the Times over the weekend.


[1] Save that link! I’m sure if you change the policy under question you can apply the text almost verbatim to most education op-eds.