
EDITORIAL: Opinion poll perils and the need for transparency

Research is not peer reviewed, nor is it particularly upfront about methodology or assumptions

Picture: ESA ALEXANDER/SUNDAY TIMES

Polling companies always say polls aren’t predictions. But that doesn’t prevent the voting public from treating them as if they are — and sometimes acting accordingly. They may just be sample surveys, and flawed ones at that, but election polls have power. They can influence election outcomes.

That is why we need to demand a great deal more transparency and accountability from election pollsters than has been the case until now. We also need a more vigilant voting public, and a more vigilant media, willing to interrogate the results that the pollsters put out, and to understand their limitations.

The polls use survey data. But unlike academic research based on survey data, this research is not peer reviewed, nor is it particularly upfront about methodology or assumptions, which can be material to the results.

Much has been made of the links between some of SA’s election pollsters and political parties, the DA in particular. But it’s not political bias we have to worry about so much as statistical bias.

The latest poll from Ipsos brings the problem to the fore. Ipsos has traditionally been viewed as one of the most reliable pollsters in the SA landscape. Its results are based on the largest face-to-face survey of registered voters. In SA, such surveys are inherently less biased, reaching a wider range of individuals and engaging them more thoroughly. And Ipsos has done well in predicting previous election results. Yet this time even Ipsos did not disclose crucial information on what turnout it assumed, or how many voters were undecided and how their responses were treated.

It’s not that we shouldn’t take the results of the various polls seriously as a reflection of the sentiment that prevails at the time of the survey. But if we want to know what exactly the results mean, we need the answers to a host of questions. Did they survey all registered voters or only those who said they intended to vote? What turnout did they assume? If they modelled different turnout scenarios, how did that affect the results? How many respondents said they were undecided or declined to say who they would vote for — and were those responses allocated to parties on the basis of some or other assumption? In most polls the undecided vote is quite large, and that’s exactly where the swings happen. It would also be useful to know how the questions were framed and how the sample was drawn.

But some of the polls answer almost none of these questions, and none answer all of them — at least not for the public, as opposed to these firms’ paying clients. And when the polls, in SA and globally, prove to have misled, few hold the pollsters to account for results that could do damage. If the UK polls had more accurately predicted the outcome of the Brexit referendum, would all those apathetic youngsters have been alarmed enough to get to the polls and stop Brexit? Would apathetic US Democrats have been alarmed enough in 2016 to come out and vote against Donald Trump?

Jonny Steinberg has suggested polling companies need to be regulated, given the influence they wield. That should be a last resort; the better outcome would be self-regulation. We should demand that they agree to a code of conduct or guidelines requiring them to be transparent about the key variables that shape their results. And as voters, journalists and analysts we should all be more careful about how we use those results.
