Michael Cohen's Effort To Rig Reader 'Polls' Shows Exactly Why They Mean Nothing

If a "poll" doesn't control who takes it and how many times they do so, it's not really a poll.

Michael Cohen, President Donald Trump’s former lawyer, promised an IT company tens of thousands of dollars to influence online reader polls ― like the ones conducted on the Drudge Report ― in Trump’s favor, according to The Wall Street Journal. The results of those polls were often regurgitated by media outlets, conservative figures like Sean Hannity and Trump himself.

That Cohen was able to cook up such a plan highlights the glaring problems with reader polls, and why — as we’ve written previously — they shouldn’t be confused with real surveys.

Scientific polling, whether conducted by phone, using an online panel, or in some other fashion, is fundamentally designed to be representative. It relies on some mix of sampling (choosing who’s selected to take the survey) and weighting (adjusting the data to account for the fact that some types of people are more likely than others to respond).

Recent changes in technology have complicated that process, and even rigorous polling is far from infallible. But there’s an enduring, basically sound underlying principle: making the pool of respondents look as much as possible like the larger population whose opinions the pollsters are trying to measure.
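To make that principle concrete, here is a rough illustration of how weighting works, using invented numbers rather than any pollster's actual figures: if a demographic group is underrepresented among respondents, its answers count for proportionally more.

```python
# Rough illustration of post-stratification weighting, with invented numbers.
# Suppose census benchmarks say 55% of the adult population is under 45,
# but only 35% of the people who answered the survey are.

population_share = {"under_45": 0.55, "45_and_over": 0.45}  # known benchmarks
sample_share = {"under_45": 0.35, "45_and_over": 0.65}      # who actually responded

# Each group's weight is its population share divided by its sample share.
weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

print(weights)
# {'under_45': 1.57..., '45_and_over': 0.69...}
# Answers from under-45 respondents count for more, so the weighted sample
# mirrors the population even though the raw sample doesn't.
```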


That’s why, even if a scientific poll reaches only a thousand or so people, it can reflect the opinions of a broader group, whether that’s something like registered Republican voters or the American public as a whole.
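The arithmetic behind that claim, under textbook assumptions about random sampling that real polls only approximate, is why a sample of about 1,000 carries a margin of error of roughly plus or minus 3 percentage points:

```python
# Textbook margin-of-error calculation for a simple random sample of 1,000.
# (An idealized assumption; real polls also adjust for weighting and design effects.)
import math

n = 1000   # number of respondents
p = 0.5    # worst-case proportion (maximizes the margin of error)
z = 1.96   # multiplier for a 95% confidence level

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"+/- {margin_of_error:.1%}")  # +/- 3.1%
```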

These techniques, of course, require pollsters to have a basic ability to control who takes their survey and to monitor the demographics of those who respond.

Reader polls, by contrast, offer none of that. While they’re fun to take, they fail on multiple levels as tools to gauge public sentiment. The people reading any particular website aren’t representative of the public at large. Those who take the time to read a particular story ― and to weigh in on it ― are even less so.

Would you be surprised, for example, to learn that HuffPost readers who clicked on a story about partisan acrimony are more politically argumentative than the average American?

Perhaps most troublingly, because reader polls have no means of gatekeeping or measuring who responds, they’re intensely vulnerable to intentional manipulation by people with a vested interest in the outcome, whether that’s someone like Cohen or online trolls. (This is also how you end up at risk of being told to name your research ship “Boaty McBoatface” or your soccer team “Footy McFooty Face.”)

The problem with reader polls isn’t that they’re conducted online. In this day and age, there are plenty of scientific web-based pollsters with procedures in place to conduct representative surveys. The problem with reader polls is the lack of any provisions for sampling or weighting and the lack of even basic safeguards against being gamed by online mobs.
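To be clear about what "basic safeguards" means here: at minimum, counting each respondent only once. A bare-bones sketch, with hypothetical identifiers, looks like this:

```python
# Bare-bones sketch of a one-vote-per-respondent check (hypothetical identifiers;
# real pollsters use far more robust ways of controlling who responds).
seen_respondents = set()
tallies = {}

def record_vote(respondent_id: str, choice: str) -> bool:
    """Count the vote only if this respondent hasn't voted before."""
    if respondent_id in seen_respondents:
        return False                       # repeat submission: ignored
    seen_respondents.add(respondent_id)
    tallies[choice] = tallies.get(choice, 0) + 1
    return True

record_vote("reader-123", "Candidate A")   # counted
record_vote("reader-123", "Candidate A")   # ignored on the second try
```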

As the Democratic primary season heats up, reader polls like these are likely to reappear. But even when a campaign isn't making a systematic effort to rig them, they remain of extremely limited utility for understanding what Americans think.

To sum up: Any “poll” that does not exert some measure of control over who takes it and how many times they do so is not really a poll, and it shouldn’t be treated like one.
