Here’s an unusual New York Times piece that says something about the art and science of polling, something about public attitudes on Iraq, and something about journalists’ attitudes.
The gist of it is that the New York Times/CBS News poll got a survey result earlier this month that surprised journalists at the two organizations. In a poll largely concerned with Hillary Clinton, they asked some now-routine questions about the Iraq war and saw an increase in the percentage of Americans who believe the original invasion was the right thing to do.
So surprised and doubtful were they that they conducted a second poll. The pollsters wondered whether including the Iraq invasion question in a poll about Clinton might somehow have influenced the answers.
But the second poll, free of extraneous influences, confirmed the first, and also found a drop in the number of people who think the war is going badly.
A few observations and questions:
There’s nothing unlikely about the pollsters’ suspicion of what might have gone wrong. That the proximity of certain questions can influence respondents’ answers to other questions is a constant challenge and hazard in polling, particularly polling on issues, as opposed to horse-race election polls.
Chances are, such influences happen pretty often — inadvertently in quality polls, and maybe not so inadvertently in advocacy group polls. Most of the time, though, the result won’t be an outcome that stands out as inexplicable. But poll consumers should keep the risk in mind.
What could explain even a modest change for the better in Americans’ feelings about the war? Confidence in Petraeus? Reports that the surge is having some success in some parts of Iraq? Something in the war debate in the developing presidential race or on Capitol Hill? Whether these questions matter will, of course, depend on whether additional polls confirm any change in attitudes.
Does it reveal anything notable about journalists’ predispositions that this result — a slight improvement in attitudes toward the war — seemed so very odd and unlikely that they went to the unusual effort and cost of double-checking their poll? Or would any careful observer have been surprised by the result?