The pundits pounced. Exit polls from the just-concluded New Hampshire primary had indicated that Bernie Sanders walloped Hillary Clinton, 92-to-6, among Democratic presidential primary voters who considered honesty to be the most important factor in choosing a candidate.

While talking heads on one of the nation’s leading cable television networks debated that statistic’s impact on the Democratic nomination contest, this observer waited for one more crucial piece of information. It never arrived.

Sanders’ wide margin over Clinton in the “honesty” department certainly qualifies as news. But its significance depends to a large degree on one additional data point: How many people consider a candidate’s “honesty” to be the most important factor?

If 75 percent of voters label “honesty” their No. 1 priority, that spells bad news for the former secretary of state’s presidential bid. If 40 percent consider a candidate’s honesty before any other factor, that’s still a serious obstacle for a candidate who loses that battle by a roughly 15-to-1 margin.

But what if only 5 percent of voters believe “honesty” trumps all other factors? Then Clinton’s honesty deficit, while disturbing, is likely to have less impact on her overall electoral prospects.
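The arithmetic behind those scenarios can be sketched briefly. In this hypothetical calculation (only the 92-to-6 split comes from the exit poll; the prevalence figures are the column’s illustrative guesses), Clinton’s net deficit among honesty-first voters, measured as a share of the whole electorate, is the product of that group’s size and the 86-point gap within it:

```python
# Illustrative only: the 92/6 split is from the exit poll; the
# prevalence shares (75%, 40%, 5%) are hypothetical scenarios.

def net_deficit(honesty_share, sanders_pct=0.92, clinton_pct=0.06):
    """Clinton's net vote deficit among honesty-first voters,
    expressed as a share of the entire electorate."""
    return honesty_share * (sanders_pct - clinton_pct)

for share in (0.75, 0.40, 0.05):
    print(f"{share:.0%} honesty voters -> {net_deficit(share):.1%} net deficit overall")
```

The same 86-point gap translates into anything from a roughly 65-point drag on the overall result down to about 4 points, which is the whole reason the missing prevalence number matters.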

The professional TV prognosticators never mentioned that crucial data point, at least before my remote flipped to another channel.

The omission reminds us of an important lesson: Context is crucial when evaluating poll results.

Providing context is relatively easy in the case of a basic election poll. Voters have a limited number of choices — candidate X or candidate Y, “yes” or “no” on a bond referendum or constitutional amendment. Even when voters can choose from among more than two options, they typically can make just one choice. As long as the polling sample mirrors the universe of likely voters, the poll can paint a reasonably accurate picture of the likely election result.

Matching the sample to the universe of voters can be tricky, of course, and pollsters tend to earn kudos or condemnation depending on how well they handle that task during each election cycle.

But that’s a far different challenge than those tied to other types of polls. As soon as the subject of a poll moves away from a clear choice between options A and B, context plays a much more important role.

Take the earlier example of “honesty” voters in the New Hampshire Democratic presidential primary election. Not only is it important to know how many people consider a candidate’s honesty to be the No. 1 factor driving their vote; it’s also important to know how that factor interacted with others.

Among the 92 percent of “honesty” voters who supported Sanders, how many also preferred his policy proposals to Clinton’s? How many supported Sanders because of his honesty and despite their opposition to his policy proposals? Answering those questions would place the “honesty” vote in its proper context. If the honesty voters were already “feeling the Bern,” it’s unlikely Clinton could count on them under any circumstances.

Context becomes even more critical when a poll involves issues rather than elections. Unlike the limited choices available at the ballot box, the views people hold on most divisive issues span a wide range. Polls mislead when they fail to convey that range accurately.

The way pollsters structure a question affects the outcome, and a poorly designed question can generate relatively meaningless results.

Take, for instance, the recent poll that suggested 72 percent of North Carolinians support Medicaid expansion. Let me rephrase that: Advocates of Medicaid expansion claimed that a recent poll suggested 72 percent of North Carolinians supported their cause.

A closer look at the poll question shows that 72 percent of those responding “think North Carolina should make a plan to fix the health insurance gap,” with no reference to expanding Medicaid.

Those who designed the poll question admitted that they intentionally left out the politically charged word “Medicaid.” The goal was to avoid skewing poll results. Yet media outlets and other expansion advocates were happy to fill in the apparent blanks in order to boost the cause.

Even issues that appear to offer cut-and-dried answers can yield poll results that offer anything but clarity.

A pollster can construct a yes-or-no question about legalized abortion, but the results will reveal little about the actual range of people’s views on the topic. Those dead set against abortion under all circumstances will answer no. Those who believe a woman should have the right to abort an unborn child at any point will answer yes.

But what about those who oppose the practice while agreeing to limited exceptions in cases of rape, incest, and threats to the mother’s life? Or those who support abortion only up to a certain point of the unborn child’s development? Forced to answer “yes” or “no,” these people will choose one side or the other for polling purposes without shedding much light on their actual preferences.

There’s a similar story for another hot-button topic: the death penalty. Breaking the issue down to a simple “yes/no” division for polling purposes fails to account for factors such as the offender’s age, the likelihood that a life sentence could be commuted, or concerns about the way the death penalty has been applied in past cases. Each factor can sway people toward one side or the other. Not accounting for those factors leaves pollsters with incomplete results.

Polling can tell us a lot, but it’s important to remember that the numbers often fail to tell the whole story.

Mitch Kokai is senior political analyst for the John Locke Foundation.