Philip Thomas

How did Sage get it so wrong?

Professor Neil Ferguson struck an unusually optimistic tone this week. With just one Covid death reported on Monday, and infection levels at an eight-month low in the UK, the architect of the original lockdown said: ‘The data is very encouraging and very much in line with what we expected.’ The first half of that statement is certainly true; the second half much less so.

As an unofficial member of the government’s Scientific Advisory Group for Emergencies (he resigned his official role after breaking lockdown rules), Prof Ferguson has been responsible for much of the pessimistic modelling during the pandemic. For example, his team at Imperial College were predicting as late as 30 March that only 45 per cent of the population would be protected against severe disease by 21 June, even under ‘optimistic’ assumptions. In fact, hard evidence from the Office for National Statistics (ONS) shows that 68 per cent of the population already had antibodies against Covid-19 by 7 April, which meant that they had either received at least one vaccination or recovered from Covid-19 (or indeed both). Whatever the case, they would certainly have a fair degree of immunity, and thus be protected from serious illness.

The growth in antibodies in England’s population could be predicted using an uncomplicated computer model, so how did Imperial get it so wrong? The PCCF model I developed at the University of Bristol matched the ONS figure for 7 April to within a percentage point - and suggests that 74 per cent of the adult population had antibodies by 2 May. As the graph below shows, the PCCF antibodies predictor in red closely tracked the ONS upper and lower antibody ranges from last summer onwards. The chart shows that three quarters of England’s total population, including children, now have protection and, as a minimum, won’t suffer serious illness.
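To make the arithmetic concrete, the core of this kind of antibody bookkeeping fits in a few lines. The sketch below is a deliberately simplified illustration, not the PCCF model itself: the vaccination and prior-infection fractions are placeholders, and treating the two as independent is a crude approximation.

```python
# Simplified sketch of population antibody bookkeeping.
# NOT the PCCF model: the inputs below are illustrative placeholders.

def antibody_fraction(vaccinated: float, recovered: float) -> float:
    """Estimate the fraction of the population with antibodies,
    assuming vaccination and prior infection occur independently,
    so the overlap is simply the product of the two fractions."""
    return 1 - (1 - vaccinated) * (1 - recovered)

# Placeholder figures: 60% with at least one vaccine dose,
# 20% previously infected and recovered.
print(f"Antibody prevalence: {antibody_fraction(0.60, 0.20):.0%}")  # 68%
```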

This figure is, of course, rising every day as more people are vaccinated - a process still proceeding apace.

Although Ferguson’s team at Imperial College has been notable for its pessimistic predictions, it has not been alone. One modelling study from the London School of Hygiene and Tropical Medicine, which also contributed to Sage’s interim roadmap assessment ahead of the next step out of lockdown, assumed that two doses of the Oxford-AstraZeneca vaccine would give only 31 per cent protection against infection, and that one dose would give just 72 per cent protection against dying from the disease. These are bewilderingly low figures, not only because much more encouraging field trial evidence had been available for months, but also because a large-scale study in Scotland found that a single dose of the AZ vaccine was 94 per cent effective in preventing hospitalisation.

The R-number: Sage’s abysmal track record

A key contributor to Sage’s doomsday outlook has been the R-number: the average number of people to whom someone with Covid passes the virus. Below one, the epidemic shrinks; above one, it grows. The R-number estimate released by Sage is decided through online debate amongst a group of academics from 11 institutions, each of whom argues for their particular figure. Eventually some sort of judgement call is made, and agreed upper and lower values are made public each week. This process, pursued throughout the pandemic, is not scientific and has produced answers of dubious worth.
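The threshold behaviour is simple compounding: each viral generation multiplies case numbers by R. A minimal illustration (the starting figure and the number of generations are arbitrary):

```python
# Why R above or below one matters: each generation of infection
# multiplies case numbers by R.

def project(cases: float, r_number: float, generations: int) -> list:
    """Project case numbers forward one viral generation at a time."""
    series = [cases]
    for _ in range(generations):
        series.append(series[-1] * r_number)
    return series

print(project(1000, 1.2, 5))  # grows: 1000 -> ~2488 after five generations
print(project(1000, 0.8, 5))  # shrinks: 1000 -> ~328 after five generations
```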

There are two main problems with Sage’s working. First, their estimates are 18 days out of date when they arrive. Second, even then, they are inaccurate. The ONS figures, seen as the gold standard, are in red below: pretty far from the Sage range of estimates (in blue).

It’s a terrible shame that, for the last year, the government has not been guided by the ONS-based estimate of the R-number. Besides being grounded in measured infection data rather than committee judgement, ONS-based estimates also reduce the measurement delay: they arrive only nine days in arrears, as opposed to 18 days out of date like Sage’s numbers. How different things could have been if the ONS data had carried more weight in the government’s thinking. This would have allowed people greater freedom and minimised disruption to the economy while still keeping the NHS well protected.
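To show how directly an R-number can be read off prevalence data, here is a simplified sketch. The prevalence figures and the assumed five-day generation time are illustrative assumptions for demonstration, not the ONS methodology or the PCCF model.

```python
import math

# Simplified sketch: inferring R from two infection-prevalence
# readings taken a week apart, assuming exponential growth or
# decline between them. All figures here are assumed for illustration.

def r_from_prevalence(p_old: float, p_new: float, days_apart: float,
                      generation_time: float = 5.0) -> float:
    """Convert observed growth in prevalence into an R-number."""
    growth_rate = math.log(p_new / p_old) / days_apart  # per day
    return math.exp(growth_rate * generation_time)

# Placeholder: prevalence falling from 0.20% to 0.15% over seven days.
print(f"R is roughly {r_from_prevalence(0.0020, 0.0015, 7):.2f}")  # ~0.81
```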

Fundamentally, Prof Ferguson and his team - and the other Sage modellers - have overcomplicated their modelling, which is inappropriate when the data we have on the virus is very limited, as it always will be with any new disease. The additional complexity of the Sage models might be academically satisfying and might, indeed, seem impressive to politicians. But it has not brought greater accuracy. At Bristol University, the PCCF model has also been used to forecast the R-number - and has come far closer to the ONS.

In 2002, in the aftermath of the mad cow disease fiasco (which Prof Ferguson and others suggested could kill millions), I provided evidence to parliament alongside my colleague Professor Martin Newby. We had been sceptical of the dire figures bandied around by Imperial, which led to the slaughter of 4.4 million cattle and the adoption of pointless countermeasures costing billions of pounds after the risk to humans, always very small, had become negligible. Our straightforward modelling showed, correctly, that variant Creutzfeldt-Jakob disease (the human form) would result in a few hundred deaths at most. As part of our evidence, we said:

The government's continued inability to give proper consideration to the spectrum of scientific opinion has been very expensive and must be a cause for major concern. It is clear that those tasked with devising policy – ministers and civil servants – need to adopt a more critical attitude to the scientific advice they are offered, even when that advice comes from one of their advisory bodies.

It is a shame that, nearly two decades on, this recommendation has not been heeded. Focusing on the advice of a handful of particular scientists, who all broadly agreed with one another, has meant that the PM has received inaccurate estimates of the R-number throughout the pandemic, despite it being an easy figure to measure. As we move forward, having defeated this pandemic, it is vital that lessons are learnt for next time.

Philip Thomas is professor of risk management at the University of Bristol
