How to get the most out of opinion polls without being led up the garden path

Piece of mind. Arithmedes

Will Jennings, University of Southampton and Ailsa Henderson, University of Edinburgh

After the polling miss at the 2015 general election, many politicians and journalists loudly declared they would never trust polls again. Two years later, opinion polls have regularly been leading the election news. First they foresaw a Conservative landslide, including a resurgence in Scotland, and more recently they’ve pointed to a shock Labour fightback.

A number of factors can confound poll interpretations. Here’s a quick guide to the pitfalls:

1. Demographic problems

Election polls generally aim to be “nationally representative”. They usually collect data from samples of respondents on the rationale that a sample which mirrors the demographics of the general population – via key quotas such as sex, age, social class and region – will produce a more accurate estimate of the national vote.

Yet while it is often possible to gain some insight into the voting intentions of particular demographic groups, the samples within these quotas are small and not themselves designed to be representative of the demographic in question. Even if they were representative, the “margin of error” on a sample of 100 people is nearly ten points of party support.

This means that a high degree of uncertainty should be attached to the data about particular demographics. To take one example from the Scottish independence referendum campaign in 2014, Lord Ashcroft’s headline-making poll finding that 16 to 17-year-olds were the most pro-independence age group was based on a sample of just 21.
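
For readers who want to check that arithmetic, here is a minimal sketch in Python of the standard margin-of-error formula, assuming a 95% confidence level and the worst case of 50% support (real quota samples are not simple random samples, so the true uncertainty is, if anything, larger):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error, in percentage points, for a simple
    random sample of size n at an assumed support share p (0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(round(margin_of_error(1000), 1))  # ~3.1 points for a typical full poll
print(round(margin_of_error(100), 1))   # ~9.8 points – the "nearly ten points" above
print(round(margin_of_error(21), 1))    # ~21 points for a sub-group the size of Ashcroft's
```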

2. Poll differences

With new polls, a lot is often made of small movements in party support. Yet these are often attributable to sampling error – that is, they are simply random noise. Some variation may also result from “house differences” – methodological choices by a given pollster in how they design their sample and weight respondents. Consequently, some pollsters might be more favourable to Labour and others more favourable to the Conservatives, for example.

The solution is to pay more attention to the average “poll of polls”, which offsets some of the sampling error, and to compare each poll with the last poll from the same firm – while checking whether the pollster has changed its methodology. A pollster’s performance at the previous election can also help in assessing its record, although even there, sampling errors in its final polls can make this difficult.
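
As an illustration of that advice, here is a minimal sketch using entirely hypothetical figures (no real pollsters or polls) of a simple poll-of-polls average and a same-firm comparison:

```python
# Hypothetical polls: (firm, Conservative %, Labour %) – illustrative numbers only.
polls = [
    ("Firm A", 44, 36),
    ("Firm B", 42, 38),
    ("Firm C", 46, 34),
    ("Firm A", 43, 38),  # Firm A's newer poll
]

# Poll of polls: average the most recent poll from each firm to damp sampling noise.
latest_by_firm = {}
for firm, con, lab in polls:          # later entries overwrite earlier ones
    latest_by_firm[firm] = (con, lab)

avg_con = sum(c for c, _ in latest_by_firm.values()) / len(latest_by_firm)
avg_lab = sum(l for _, l in latest_by_firm.values()) / len(latest_by_firm)
print(f"Poll of polls: Con {avg_con:.1f}%, Lab {avg_lab:.1f}%")

# Same-firm comparison: change since Firm A's previous poll, not versus other firms.
firm_a = [(c, l) for f, c, l in polls if f == "Firm A"]
(prev_c, prev_l), (new_c, new_l) = firm_a[0], firm_a[-1]
print(f"Firm A movement: Con {new_c - prev_c:+d}, Lab {new_l - prev_l:+d}")
```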

Fluctuations in the polls can also result from what is called “differential non-response”. This is where voters become more or less willing to respond to surveys when their party is doing well.

3. National polls and local/regional results

The UK election includes 650 constituencies, and we cannot assume that national swings in the polls translate evenly across them. Support for individual candidates, their incumbency or media profiles, as well as leader visits can all influence constituency contests against national trends.

Poll analysts also often try to use these polls to infer electoral dynamics in different parts of the UK. Scottish and Welsh surveys with samples of around 1,000 are conducted less frequently than their national counterparts, for example, so trends are often inferred from Scottish and Welsh sub-samples of British polls.

These sub-samples are substantially smaller – usually fewer than 200 people, sometimes fewer than 100. Not only are they subject to much larger margins of error, they are not independently weighted. And while the UK sample will match the demographic profile sought by the polling company, any sub-sample may not. The demographics of a Scottish sample might look nothing like Scotland itself.

Sub-samples are no more likely to provide insight into regional dynamics than national polls are to provide insight into local dynamics. Hence two Panelbase polls and two YouGov polls conducted simultaneously in April had the four main parties in Scotland up or down by a total of 14 and 18 points respectively – in each pair, the first was a full Scottish sample of around 1,000 and the second a sub-sample from a UK-wide poll.
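
To get a feel for how much of that churn could simply be noise, here is a small simulation sketch – the vote shares are assumed for illustration, not the actual April figures – drawing two back-to-back samples from an unchanged electorate at full-poll and sub-sample sizes:

```python
import random

random.seed(1)

# Assumed, illustrative Scottish vote shares – not the actual April 2017 figures.
true_shares = {"SNP": 0.42, "Con": 0.28, "Lab": 0.18, "LD": 0.07}
parties = list(true_shares)
weights = list(true_shares.values())

def poll_estimate(n):
    """Party shares (in points) from one simple random sample of size n."""
    draws = random.choices(parties + ["Other"], weights + [1 - sum(weights)], k=n)
    return {p: 100 * draws.count(p) / n for p in parties}

# Total absolute "movement" between two back-to-back samples of the same electorate.
for n in (1000, 150):
    a, b = poll_estimate(n), poll_estimate(n)
    churn = sum(abs(a[p] - b[p]) for p in parties)
    print(f"n={n}: spurious movement across four parties is about {churn:.0f} points")
```

Because the standard error scales with one over the square root of the sample size, the 150-person draws will typically show movement around two-and-a-half times larger than the 1,000-person draws, with nothing real behind it.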

The solution is to use polls for what they were intended: a snapshot of views across the sampled electorate as a whole.

4. A possible alternative

One alternative to over-relying on polls is to look to by-election or local election results instead, but these carry risks too. One issue is timing – a vote cast in a previous year may not reflect how people would vote now. Another is whether voters make up their minds based on the same factors. In 2007, for example, Scotland voted in the Holyrood and local elections on the same day, and the main parties saw differing levels of support in the two contests.

5. Case for the defence

People may not be aware of many of these limitations, but a conventional wisdom has recently developed that polls can’t be relied upon and that polling errors have grown larger over time. This is partly thanks to recent electoral events not widely foreseen by pollsters, such as the Conservative over-performance at the 2015 UK general election, Brexit and the election of Donald Trump.

In fact, the pollsters in the 2016 US presidential election were not far off in their estimates of the national popular vote by historical standards. Meanwhile, pollsters performed well in recent elections in Canada (2015) and Australia (2016), and did a good job in the first round of the recent French presidential election (less so the run-off).

As already discussed, polls are subject to both “bias” and “error”, and even a poll of polls doesn’t get rid of all problems, since the polls may still be wrong collectively. Yet much of the problem stems from our own over-confidence in the precision of polls. The “headline” figures of any poll can still provide relative insight into the state of party support once you accept the potential for sizeable error in either direction.
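
A tiny sketch with hypothetical numbers of why averaging helps with random error but does nothing for a shared bias:

```python
import random

random.seed(2)

true_lab = 35.0      # assumed "true" Labour support, in points
shared_bias = 2.0    # every pollster over-states Labour by the same amount
polls = [true_lab + shared_bias + random.gauss(0, 2) for _ in range(20)]

poll_of_polls = sum(polls) / len(polls)
print(f"One poll:       {polls[0]:.1f}")       # noisy and biased
print(f"Poll of polls:  {poll_of_polls:.1f}")  # noise largely averaged away...
print(f"True support:   {true_lab:.1f}")       # ...but the two-point bias remains
```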

In short, pay attention to the polls. They remain a valuable bulwark against the “conventional wisdom” of partisan anecdotes from the doorstep. Just be cautious about the latest individual results and especially finer demographic details, while bearing in mind recurring biases such as the tendency to over-estimate Labour support in elections since 1983.

The best approach is always to collect clues about how the political wind is blowing from multiple sources. And remember that with a little patience, you will know the result by the morning of June 9.

Will Jennings, Professor of Political Science and Public Policy, University of Southampton and Ailsa Henderson, Head of Politics and International Relations, University of Edinburgh

This article was originally published on The Conversation. Read the original article.
