If North Carolina’s Smart Start program were as effective in transforming education as it is in public relations, we’d have something to write home about. Unfortunately, its operators and “independent” evaluators have struck again with the high-profile release of a study that, upon closer inspection, still fails to demonstrate the benefits originally promised by Smart Start boosters.

As the state’s media organizations are reporting it, a new report from UNC-Chapel Hill’s Frank Porter Graham Child Development Institute – the official, and very friendly, evaluator of the Smart Start program – appears to provide the long-awaited evidence that investing taxpayer money in the preschool intervention is paying off in educational gains once preschoolers reach primary school. This is accomplished by the statistical equivalent of the transitive property: 1) current participation in Smart Start activities is associated with higher levels of measured quality among child care centers, 2) higher levels of measured quality among child care centers are associated with higher readiness scores for children once they reach kindergarten, so 3) Smart Start boosts children’s educational outcomes.

Sound persuasive? It might if one left it there. Fortunately, the Frank Porter Graham researchers, though clearly less than impartial in their views about Smart Start, are reputable scholars. Their brief report provides enough information for the interested reader to draw independent conclusions. Here’s an arresting quote from the report’s conclusion: “The study does not establish causality between Smart Start participation, child care quality, and child outcomes. Random assignment of centers to Smart Start and of children to centers is required to establish causality, but is not feasible for a community initiative such as Smart Start.”

Here’s where the trouble starts. As some of us warned early on, any serious effort to develop an effective preschool-intervention program for at-risk North Carolina preschoolers would have carefully set up either a series of random-assignment experiments – precisely what the UNC-CH researchers say was not “feasible,” even though random assignment has been used in key educational research projects such as the Tennessee class-size experiments and a host of school-choice studies – or at least a series of “paired county” experiments in which some communities would have offered Smart Start interventions and otherwise similar communities would not. Either design would have created the prospect of testing whether, all other things being equal, a new, expensive preschool program would deliver educational value large enough to justify the cost. But it would also have interrupted former Gov. Jim Hunt’s timetable and required lawmakers to act wisely rather than parochially in rolling the program out.

You see, for most politicians Smart Start was a “success” as soon as it was announced and got its first round of state and national press attention. Proving that it actually provided lasting educational benefits – in the face of decades of experience with the federal Head Start program suggesting it probably wouldn’t – was never an important consideration, at least to the politicians.

Back to the new report. The evidence relating Smart Start participation to measures of child-care quality is fair. The correlation appears to be weak, but perhaps I’m reading it wrong. On the other hand, the link between child-care quality and kindergarten readiness appears to be strong. Still, this means less than you might think. The propensity to place one’s children in a good-quality center likely reflects levels of parental knowledge, involvement, and commitment that are not modeled in this study (poverty and race are, and show the usual patterns). These are precisely the qualities that help one’s children perform well in school. Good parenting could well be the unmeasured variable that explains both child-care quality and kindergarten readiness.

More importantly, evidence of a statistically significant relationship between these variables is not evidence of an educationally significant relationship. I know that sounds weird, so let me clarify: statistical significance tells us only that an effect probably isn’t a product of chance; it says nothing about whether the effect is large enough to matter. It is quite possible, all things being equal, that attending a child care center that has experienced some quality improvements from Smart Start grants has a statistically significant impact on a child’s readiness to learn once in kindergarten. It would, in fact, be hard to imagine that there wouldn’t be some immediate benefits, as my colleague Karen Palasek pointed out in the AP’s story on the new report.

That’s not the point. Kindergarten isn’t the end of the schooling process. It’s the beginning. For Smart Start to prove its educational merit, it would have to make an impact on children large enough that subsequent years of schooling – years during which, it has been shown repeatedly, students in the same classrooms with similar backgrounds tend to perform similarly regardless of past preschool status – wouldn’t water down or eliminate the gain.

In a previous study, the same researchers found that some but not all Smart Start activities correlated with a statistically significant, but very small, gain in kindergarten readiness. The gain was probably too small to have any long-term bearing on educational outcomes. This time around, a similar pattern emerged. That is, researchers found a statistically significant link between child-care quality and kindergarten readiness scores. However, even they admit that the impact on both language and math readiness was “a small effect” in the context of educational research. No mention of this crucial finding appears in the press release and promotional materials released by Smart Start on Tuesday. It might have tempered some of the enthusiasm to admit that the end result here is a “small effect” that seems likely to fade out by the third or fourth grade.

Smart Start proponents have a history of making extravagant claims, starting with Hunt’s own howler back in 1998 that North Carolina’s gains on national tests that year were partially attributable to Smart Start, despite the fact that the youngest students being tested – fourth graders – were too old to have been exposed to the program. I’m not saying that this latest report doesn’t offer some useful insights. I’m just suggesting that skepticism is warranted, that accepting the report at face value still means Smart Start’s immediate benefits are “small” and its long-run benefits are speculative at best, and that the program’s original design has already ruined the best chance researchers would have had to probe the efficacy of state preschool interventions.

Probably just a coincidence, right?

Hood is president of the John Locke Foundation and publisher of Carolina Journal.