RALEIGH – Most large institutions are better at spending money on shiny new programs and favored constituencies than they are at spending money on evaluating their existing programs. The problem is evident at large companies. It’s evident at large nonprofits such as universities and hospitals. It’s evident in the military and the church.

Governments are particularly prone to skimp on evaluation, however, because their managers generate future revenue not by outperforming their competitors or capturing the payoff of innovation but by winning reelection. You can do that with lofty rhetoric, crafty redistricting, and canny fundraising. You don’t have to prove that your past programs have yielded the benefits you promised.

Still, one can always hope that careful evaluation will reveal the successes and failures of government programs – and that the findings will then pass from the pages of audits and independent studies to the briefing books of policymakers and the news diet of voters.

Of course, “one” tends to cling to such a hope when “one” works at a think tank that, among other things, evaluates government programs. During the John Locke Foundation’s first 20 years – yes, that’s right, JLF is about to celebrate its China Anniversary with a January 13th banquet in Cary – our analysts have released evaluations of programs ranging from school reforms to economic-development policies. Sometimes they’ve been ignored completely. Sometimes public officials have responded defensively and attempted to rebut the findings. But sometimes, our work has helped guide state and local policymakers to reform programs, as happened during the 1990s with welfare reform and taxes and the 2000s in campaign-finance and ethics disclosure.

Unfortunately, human nature intrudes. Rare is the politician who will, after passionately advocating the creation of a pet program, turn around and radically reform or eliminate the program after an evaluation questions its effectiveness.

A classic example is preschool intervention. For decades now, both liberal and not-so-liberal politicians in Washington and Raleigh have clung to the plausible and promising notion that spending tax money on early childhood education can save money in the long run by boosting high-school graduation rates and reducing rates of future crime, joblessness, and welfare dependency.

The notion is plausible in part because some early laboratory experiments of preschool intervention demonstrated long-term benefits with a few dozen test subjects. And it’s promising because so many other attempts at improving the lives of disadvantaged students – ranging from in-school reforms to various public-assistance programs – have proven to cost more and deliver less than expected.

The political fascination with preschool intervention began in the 1960s with Head Start, then deepened during the past two decades with state-initiated programs such as North Carolina’s own Smart Start in the 1990s and More at Four in the 2000s.

Alas, there was little serious effort made at evaluating the effectiveness of these programs. The priority was to expand them as rapidly as possible, as broadly as possible. In a legislative process run by members representing discrete geographical units, expecting some lawmakers to wait patiently while others saw new pilot programs created in their districts proved unrealistic.

In the case of preschool intervention, the result has been the expenditure of billions of dollars over the past two decades with little evidence of gain. As I’ve noted, the major improvements in North Carolina’s performance on independent reading and math tests predated the statewide implementation of Smart Start and More at Four. After these programs went into effect, the state’s academic performance stalled out.

As for Head Start, the Heritage Foundation’s Dan Lips related the history of the federal government’s program evaluation in a recent column. To make a long story short, in 1998 Congress ordered a new program evaluation of Head Start. The initial one, released in 2005, showed modest gains for youngsters right after participating in Head Start. That was no surprise. The real question has always been: do gains in preschool last into elementary school? In the past, the answer has been no.

So what’s the answer this time? Well, the next report was supposed to be released in March 2009, but it has yet to appear. Lips thinks he knows why:

Former HHS officials have told me that they were briefed on the results of the first-grade evaluation in 2008. They report that the evaluation found that, overall, Head Start participants experienced zero lasting benefits compared to their non-Head Start peers by the end of first grade. These officials expressed little surprise that the report’s release had been delayed.

I’m not surprised, either. Disappointed? Yes.

Hood is president of the John Locke Foundation.