I know the following plea is probably futile. But I’m going to offer it anyway. Can we please stop making every event into a partisan slugfest?
A case in point is the recent release of test scores from the National Assessment of Educational Progress. In North Carolina’s case, Republicans highlighted an uptick in a test score from 2017 to 2019. Democrats pointed to stagnation in average scores since 2011. Advocates and critics of various reforms, from teacher-pay hikes to Read to Achieve, cited the new NAEP results as they restated their preexisting talking points.
The result was a confusing mess. To the extent average North Carolinians were exposed to it, they were poorly served. The cause of educational progress was poorly served, too.
In reality, the 2019 NAEP scores in reading and math weren’t revelatory. They largely reinforced two facts that fair-minded analysts have recognized for many years. First, North Carolina’s schools are more effective than those of most other states. Second, student achievement has shown little improvement in the past decade — indeed, when it comes to reading, there’s been no improvement so far this century.
While information about schools comes in a variety of forms, NAEP results attract — and deserve — special attention because they employ the same tests given across the country in the same grades and subjects. States have few means of manipulating the scores to make themselves look better. And because NAEP and other sources collect information about the test-taking population in each state, there is less of a chance of drawing mistaken conclusions, positive or negative, about school performance.
That’s a critical point. Surely we all recognize that simply eyeballing test scores, graduation rates, and other outcome statistics cannot tell us whether a given school is effective. Many factors heavily influence student performance, such as household income and family structure. Schooling matters, of course, but children spend much more time outside classrooms than they do inside them.
When educators and scholars try to figure out which schools, teachers, or practices confer the greatest benefit on student success, they control for these background characteristics. I’ve written before about the Urban Institute’s handy “America’s Gradebook” tool that adjusts the NAEP data automatically. It’s now been updated with the 2019 scores.
Adjusting for student background, North Carolina’s eighth-graders rank third in the nation in mathematics performance. Our fourth-graders rank seventh. In reading, North Carolina’s eighth-graders rank 11th and our fourth-graders rank sixth.
Should North Carolinians be satisfied with these rankings? Of course not. We should aspire to the highest level of school effectiveness, currently occupied by the likes of Florida, Massachusetts, and New Jersey. And the real world of employment and citizenship doesn’t adjust for family background.
Nevertheless, the fact that North Carolina is significantly above average when it comes to the estimated effectiveness of schools — as distinguished from the average performance of students — ought to serve as a helpful corrective to the hyperbole that so often pervades political debates about education policy.
On the other hand, it should comfort no one that North Carolina has seen no lasting improvement in eighth-grade math performance since 2011 or in eighth-grade reading performance since 2000, despite multiple attempts at education reform. Oh, the scores have ticked up or down a point or two, but keep in mind that NAEP is based on samples of students in each state. Such changes have generally not been statistically significant (although that hasn’t kept headline writers and politicians from treating them as meaningful). And even after adjusting for the composition of North Carolina’s student population, we ranked roughly the same in 2019 as we did in the early 2000s.
To the extent education reforms succeed, they usually do so gradually. Even positive changes in school settings, curricula, teaching practices, or other policies can take many years to produce measurable effects.
I began this column with a politically unpopular request, so I suppose I might as well end with two more: don’t oversimplify and don’t jump to conclusions.