News: CJ Exclusives

Debate Continues on Exit Exams

Lawmakers dropped the high-school exit exam amid concerns about graduation rates

When North Carolina legislators decided to drop the planned high school graduation exit exam for 11th graders, they weren’t necessarily saying that there should be no final requirement for graduation. But with the four-year graduation rate at just 60 percent, they also weren’t willing to see it go lower on the basis of one high-stakes test. The idea of a high-stakes exit exam is, after all, to inspire better performance from schools, teachers, and students.

In language designed to limit the number of required tests each school year, the legislature barred the State Board of Education from imposing “any additional standardized tests beyond those that were administered in the 2002-2003 academic year.” The high-stakes 11th-grade exit exam was to be implemented starting in the spring of 2004. For the first time, the results would have determined a student’s eligibility for graduation.

Instead, the state will keep, for now, the exam procedure it has used in the past. To be eligible to graduate, students must take the required courses and pass the eighth-grade reading and math competency tests.

Students can take the high school competency tests beginning in the fall of their ninth-grade year, with the provision that “students who fail to attain the required minimum standard for graduation in the ninth grade shall be given remedial instruction and additional opportunities to take the test up to and including the last month of the 12th grade.” If students fail part of a test, they need retake only that part.

The competency tests are hardly the kind of all-or-nothing hurdle that parents and educators fear will knock kids out of contention to graduate. Still, the state is rethinking the need to have an exit measurement for all students about to receive a diploma.

According to reports in The News & Observer of Raleigh, State Schools Superintendent Mike Ward and State Board Chairman Howard Lee are considering pre-graduation alternatives. One option would be to increase the weight of the end-of-course tests, which currently count for only 25 percent of a student’s grade; raising the weight attached to existing exams doesn’t violate the prohibition against new ones. Another option could be a senior project, already a requirement in some schools.
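The arithmetic behind the weighting option is straightforward. Here is a minimal sketch with hypothetical scores; the 25 percent figure comes from the article, while the 40 percent alternative and the specific scores are assumptions for illustration only.

```python
# Sketch of how raising an end-of-course exam's weight changes a
# final course grade. Weights and scores are hypothetical.

def final_grade(coursework, exam, exam_weight):
    """Blend a coursework average and an exam score by the exam's weight."""
    return coursework * (1 - exam_weight) + exam * exam_weight

coursework, exam = 88, 60  # hypothetical scores

# Under the current 25 percent weight:
print(round(final_grade(coursework, exam, 0.25), 1))  # 81.0

# Under an assumed heavier 40 percent weight, the same exam score
# pulls the final grade down further:
print(round(final_grade(coursework, exam, 0.40), 1))  # 76.8
```

As the sketch shows, a heavier exam weight makes a weak exam performance harder to offset with coursework, which is why reweighting could serve as a de facto exit standard.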

The difficulty that education officials face involves creating a tough exit measurement, but not one so tough that it leads to lower graduation rates, or to grades inflated just enough to get students through.

Pushed out or pulled up?

Exit exams have been under attack recently because educators fear that higher standards translate into fewer graduates. Yet statistical research examining a number of factors that might affect student outcomes found that neither exit exams, class size, spending, nor the secondary student-teacher ratio had a significant effect on high school completion.

A May 2004 study by the Manhattan Institute, “Pushed Out or Pulled Up? Exit Exams and Dropout Rates in Public High Schools,” specifically looked for factors that might improve achievement without causing more students to drop out.

According to Jay Greene and Marcus Winters, authors of the study, under two different methods of calculation, “[T]he results for both graduation rate calculations show that adopting a high school exit exam has no effect on a state’s graduation rate.”

One method for calculating the graduation rate relies on a National Center for Education Statistics data set, which compares national graduation rates over time. The method is not perfect, according to Greene, but it is well-respected and free of the problems of some alternative methods.

A second method was developed by Greene for use in earlier studies. The Greene method divides the number of diplomas awarded… in a given year by the estimated number of students who entered the ninth grade four years earlier, according to the report. Because of possible “jags” in enrollment from year to year, Greene’s method has a statistical component to adjust for anomalies in the number of students from year to year.
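The Greene calculation described above can be sketched in a few lines. This is an illustration only, with hypothetical enrollment numbers; the smoothing shown here is a simple average of adjacent years' counts, a stand-in assumption for the study's actual statistical adjustment for year-to-year "jags."

```python
# Sketch of a Greene-style graduation-rate calculation: diplomas
# awarded in a given year, divided by a smoothed estimate of the
# ninth-grade cohort that entered four years earlier.

def greene_graduation_rate(diplomas, ninth_grade_counts):
    """Divide diplomas by an averaged ninth-grade cohort estimate.

    ninth_grade_counts holds enrollment for the entering year and the
    years around it; averaging damps anomalies in any single count.
    (A simplification of the study's actual adjustment.)
    """
    smoothed_cohort = sum(ninth_grade_counts) / len(ninth_grade_counts)
    return diplomas / smoothed_cohort

# Hypothetical numbers: 60,000 diplomas awarded, against ninth-grade
# enrollments of 98,000, 100,000, and 102,000 around the entering year.
rate = greene_graduation_rate(60_000, [98_000, 100_000, 102_000])
print(round(rate, 2))  # 0.6
```

The averaging step matters because a one-year spike or dip in ninth-grade enrollment (from retention policies, for example) would otherwise distort the rate for the graduating class four years later.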

Eighteen states have had some type of exit exam in place since at least 1980, providing at least 10 years’ worth of data. The exams vary in type, but all require students to pass before graduating.

No difference

According to “Pushed Out or Pulled Up?” either calculation method gives the same result: Exit exams are not responsible for lowering graduation rates. States that are leery of a drop in graduates if they adopt an exit standard can breathe easier over the prospect.

But is that true? Some critics argue that more recent tests have raised the stakes by adopting higher standards and more difficult material, so perhaps the 1980s results are no longer valid.

Not so, according to Greene and Winters. The analysis found no statistical relationship between the year a test was given and graduation rates, so that “current tests are having the same null effect on graduation rates as the graduation tests of the past.”

This is an interesting conclusion: tougher tests apparently don’t cause fewer students to graduate. The authors speculate that tougher tests may translate into pressure to improve. Even if they don’t, the “meaningfulness” of a diploma should increase with harder exit exams.

As for the concern that exit exams weed out students who are not prepared to graduate, the authors argue that those students would most likely have failed even without the exam. If there is any positive reason to have an exam in place, the authors suggest, it is that an exam could force schools to address the low-performing end of their student spectrum.

“Exit exams force schools to focus their time and resources on low-achieving students they previously ignored. This improved use of resources causes some students to earn their diplomas who otherwise would have dropped out,” Greene and Winters write in an op-ed for the Indianapolis Star. So the tests may act as a kind of quality control even if they have no other measurable consequences.

Quality of existing standards

Abandoning the originally planned exit exam is not the final step in North Carolina’s high school accountability efforts, Lee said. “I really do think we need a strong exit measurement,” he told The News & Observer of Raleigh. “I’m not sure what that is yet — is it an exam, is it a senior project?”

Existing standards for K-12 accountability are rated “fair” for North Carolina by the Thomas B. Fordham Foundation and Accountability Works. These organizations scrutinize standards, curriculum, and accountability measures.

In “Grading the Systems: The guide to state standards, tests, and accountability policies,” Theodor Rebarber, Richard Cross, and Justin Torres summarize a 30-state study. They find that for all 30, the accountability systems “may best be described, on average, as mediocre.”

North Carolina’s results for standards, testing, and accountability policies averaged “fair,” but the scores vary widely among elementary, middle, and high schools.

Tests in the state received a “solid” mark, but evaluators didn’t have access to the high school tests for the study. That was a factor in the state’s “poor” rating on test trustworthiness and openness, and for the same reason the authors could not gauge the rigor of the existing tests.

Overall, North Carolina continues to flirt with a soundly positive review of its testing and accountability. The Fordham study indicates that the system could be outstanding, but without openness, how to proceed from here remains a matter of speculation.

Palasek is a policy analyst at the John Locke Foundation and associate editor at Carolina Journal.