I spent a good amount of time in graduate school studying polling and following the debates over the proper way to poll. I recall that in the middle part of the decade there was a long debate among pollsters about what factors should be weighted and what factors should be measured. In particular, there was the question of party identification.
There are adherents on both sides of that debate, but as one might guess, properly setting that number in one's sample makes a huge difference in the accuracy of a poll.
With the advent of the internet age, one thing a lot of political observers do is, rather than look at a single poll, look at the average across polls. That tends to reduce survey-to-survey noise and show the overall picture. Another approach is to simply ignore the raw numbers and look for the trend within an individual poll. The logic there is that the internal dynamics of a given poll are consistent, so you can assess whether a candidate is doing better or worse by tracking one poll over time.
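The averaging approach can be sketched in a few lines. The margins below are made up purely for illustration; the point is only that the cross-poll average damps the noise that any single survey exhibits.

```python
# Hypothetical Democratic-lead margins (in points) from five surveys.
polls = [7.0, 13.0, 9.0, 11.0, 10.0]

# Averaging across polls smooths out survey-to-survey noise:
# individual polls range from 7 to 13, but the average sits
# near the middle of the pack.
average = sum(polls) / len(polls)
print(average)  # 10.0
```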
While on the whole I prefer the averaging approach, it is worth noting that not all polls perform the same. Some polls are more accurate over time, and some pollsters are more accurate in one year than another.
It is worth noting that in my article on the Field Poll, one of the conservative bloggers cited the Rasmussen Poll, which showed a much more modest 3-to-4-point lead for both Brown and Boxer. I already knew that the Rasmussen Poll is almost a Republican poll: it is contracted by Fox and tends to be the most favorable to Republicans. That even Rasmussen showed both candidates with a lead indicated that they indeed had a solid lead.
It turns out the data and results back up my hunch at the time.
On November 3, the LA Times blog did a piece on which pollsters called California’s top races correctly.
They found that the L.A. Times/USC Poll and the Field Poll were correct in projecting comfortable wins, while the Rasmussen Poll did the worst.
Of interest was the fact that the Republican candidate attacked the Times/USC Poll, "saying incorrectly that Times polls always favored candidates the paper had endorsed."
However, the paper got the last laugh as, “In the end, Brown won by 12 points and Boxer by nine. The poll that came closest to nailing the results: The L.A. Times/USC survey, which had projected a 13-point margin for Brown and an eight-point margin for Boxer. Field, which had projected margins of 10 points for Brown and eight for Boxer, came in a close second.”
Rasmussen, as I said, did the worst. “The worst record? The Rasmussen surveys, which were conducted for Fox News and Rasmussen’s own survey website [sic]. Those polls projected a Boxer margin of three points and a Brown win by four.”
One big factor is that the Field Poll and the Times/USC Poll now survey both landlines and cellphones, whereas Rasmussen does not. That means Rasmussen is less likely to reach younger voters, who tend to lean more Democratic.
In our household we have a landline, but we rarely answer it; we use our cellphones as our primary phones, with the landline serving mainly as a back-up and a fax line.
The other key difference is the determination of who is a likely voter. Writes the blog, “The Times/USC survey based its likely-voter model on questions about a person’s enthusiasm about voting this year, the respondent’s expressed certainty about voting and his or her voting history. Some Republican analysts said that the emphasis on past voter history was screening out Republicans who had not voted in 2006 and 2008 but who would show up this year. In the end, those hypothetical voters turned out to be something of a mirage. Exit polls this year showed an electorate that was quite similar to the group that voted in the 2006 midterm elections.”
All of this, of course, means that polling is an art, not entirely a science: pollsters have to model the expected voter universe and get it right. Flawed assumptions about the voting population will make for a flawed or skewed poll result.
It turns out that, nationally, Rasmussen did quite poorly overall, and observers ought to be cautious before citing a Rasmussen poll as evidence of anything more than a trend. Nate Silver, a polling guru who runs his own website, FiveThirtyEight.com, wrote a blog post in the NY Times on November 4.
His research found, “On Tuesday, polls conducted by the firm Rasmussen Reports — which released more than 100 surveys in the final three weeks of the campaign, including some commissioned under a subsidiary on behalf of Fox News — badly missed the margin in many states, and also exhibited a considerable bias toward Republican candidates.”
He continues, “The 105 polls released in Senate and gubernatorial races by Rasmussen Reports and its subsidiary, Pulse Opinion Research, missed the final margin between the candidates by 5.8 points, a considerably higher figure than that achieved by most other pollsters. Some 13 of its polls missed by 10 or more points, including one in the Hawaii Senate race that missed the final margin between the candidates by 40 points, the largest error ever recorded in a general election in FiveThirtyEight’s database, which includes all polls conducted since 1998.”
Moreover, not only were they inaccurate, they were biased. Mr. Silver writes, “Rasmussen’s polls were quite biased, overestimating the standing of the Republican candidate by almost 4 points on average.”
He continues, “In just 12 cases, Rasmussen’s polls overestimated the margin for the Democrat by 3 or more points. But it did so for the Republican candidate in 55 cases — that is, in more than half of the polls that it issued.”
“If one focused solely on the final poll issued by Rasmussen Reports or Pulse Opinion Research in each state — rather than including all polls within the three-week interval — it would not have made much difference. Their average error would be 5.7 points rather than 5.8, and their average bias 3.8 points rather than 3.9,” writes Silver.
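The two statistics Silver reports, average error and average bias, are straightforward to compute: error is the absolute size of the miss, while bias is the signed miss, so that pro-Republican and pro-Democratic misses cancel. The numbers below are hypothetical, used only to show the arithmetic.

```python
# Hypothetical (poll_margin, actual_margin) pairs, each expressed
# as the Republican-minus-Democrat margin in points.
results = [(4.0, -12.0), (-10.0, -9.0), (2.0, -1.0)]

# Error: how far off each poll was, regardless of direction.
errors = [abs(poll - actual) for poll, actual in results]

# Bias: signed miss; positive values indicate a pro-Republican lean.
misses = [poll - actual for poll, actual in results]

avg_error = sum(errors) / len(errors)   # mean absolute miss
avg_bias = sum(misses) / len(misses)    # mean signed miss
print(avg_error, avg_bias)
```

Note that a pollster can have a large average error with little bias (misses in both directions) or, as Silver found with Rasmussen, a large error paired with a consistent directional bias.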
Mr. Silver found no difference between the polls labeled as Rasmussen and those commissioned for Fox News through its subsidiary. He argued, "Both sets of surveys used an essentially identical methodology," and found, "Polls branded as Rasmussen Reports missed by an average of 5.9 points and had a 3.9 point bias. The polls it commissioned on behalf of Fox News had a 5.1 point error, and a 3.6 point bias."
Rasmussen polls were increasingly criticized during the election cycle.
“We have critiqued the firm for its cavalier attitude toward polling convention,” according to Mr. Silver. “Rasmussen, for instance, generally conducts all of its interviews during a single, 4-hour window; speaks with the first person it reaches on the phone rather than using a random selection process; does not call cellphones; does not call back respondents whom it misses initially; and uses a computer script rather than live interviewers to conduct its surveys. These are cost-saving measures which contribute to very low response rates and may lead to biased samples.”
This gets to the point I made earlier about the debate over assumptions of party identification. Rasmussen anchors its samples to preset party-identification assumptions, which could distort the results if those assumptions misestimate the partisan breakdown of the electorate.
Writes Mr. Silver, “Rasmussen also weights their surveys based on preordained assumptions about the party identification of voters in each state, a relatively unusual practice that many polling firms consider dubious since party identification (unlike characteristics like age and gender) is often quite fluid.”
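To illustrate what anchoring to preordained party-ID targets means in practice (this is not Rasmussen's actual procedure, just a minimal sketch with made-up numbers): each respondent's party group is weighted by the ratio of the assumed electorate share to the sample share, so the weighted sample matches the pollster's assumptions.

```python
from collections import Counter

# Raw respondents by party ID: Democrat, Republican, Independent.
sample = ["D", "D", "D", "R", "R", "I"]

# Preordained assumptions about the electorate (hypothetical).
targets = {"D": 0.40, "R": 0.40, "I": 0.20}

counts = Counter(sample)
shares = {p: counts[p] / len(sample) for p in targets}

# Weight = assumed share / observed share. Here Democrats (half the
# raw sample) are weighted down, Republicans and Independents up.
weights = {p: targets[p] / shares[p] for p in targets}
print(weights)
```

The risk Silver identifies is visible here: if the targets are wrong, say, if the actual electorate is more Democratic than assumed, every result inherits that error, and unlike age or gender, party identification is fluid enough that fixed targets are a gamble.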
However, Rasmussen has not been consistently poor, only this year, though it seems their predictions have always leaned more conservative. "Rasmussen's polls — after a poor debut in 2000 in which they picked the wrong winner in 7 key states in that year's Presidential race — nevertheless had performed quite strongly in 2004 and 2006. And they were about average in 2008. But their polls were poor this year."
He continues, “The discrepancies between Rasmussen Reports polls and those issued by other companies were apparent from virtually the first day that Barack Obama took office. Rasmussen showed Barack Obama’s disapproval rating at 36 percent, for instance, just a week after his inauguration, at a point when no other pollster had that figure higher than 20 percent.”
“Rasmussen Reports has rarely provided substantive responses to criticisms about its methodology,” Mr. Silver reports. “At one point, Scott Rasmussen, president of the company, suggested that the differences it showed were due to its use of a likely voter model. A FiveThirtyEight analysis, however, revealed that its bias was at least as strong in polls conducted among all adults, before any model of voting likelihood had been applied.”
“Some of the criticisms have focused on the fact that Mr. Rasmussen is himself a conservative — the same direction in which his polls have generally leaned — although he identifies as an independent rather than Republican,” Mr. Silver writes. “In our view, that is somewhat beside the point. What matters, rather, is that the methodological shortcuts that the firm takes may now be causing it to pay a price in terms of the reliability of its polling.”
None of this means that Rasmussen cannot fix some of the problems in its polling in the next election cycle. My recommendation is to look at about a three-week window of polls and see where the various polling companies rank in terms of margin for Democrats versus Republicans. If you see a polling company consistently at the bottom or the top, use caution about relying on its polls.
I still think a polling average is the best measure, as it tends to smooth out individual methodological issues and produces much less week-to-week noise. Short of that, pick a poll with a good reputation over time, or one in the middle of the spread.
—David M. Greenwald reporting