Jamie Bartlett

Why politicians love to blame an algorithm

As Health Secretary, Jeremy Hunt said something very important, almost by mistake. He told the Commons in May 2018 that ‘a computer algorithm failure’ meant 450,000 patients in England missed breast cancer screenings. As many as 270 women might have had their lives shortened as a result. That point hasn't received the analysis it deserves: scores of women died sooner than they should have done, because of an algorithm.

He’s probably right, you know. It really could have been a computer model to blame here. But that’s obviously unsatisfactory, since we need humans to hold to account when things go wrong. Let’s say it was a poorly programmed algorithm – who’s at fault? The tech guy who wrote it, years ago? The person who commissioned it? The people feeding the data in?

The more we outsource decisions to machines, the more blurred the lines of accountability will become. ‘It was the algorithm’ will, I expect, become a common refrain in the years ahead. Cathy O’Neil, in her excellent book Weapons of Math Destruction, has documented dozens of instances where important decisions – relating to hiring policy, teacher evaluations, the distribution of police officers and more – are effectively outsourced to the cold and unquestionable efficiency of proprietary data and algorithms, even though these decisions have important moral dimensions and consequences. And we have no real way of knowing how those decisions are being made.

As more decisions are taken by machines – which they will be – no doubt lives will be saved too. If a machine’s diagnosis were repeatedly better than a human doctor’s, it would potentially be unethical to ignore the machine’s advice. A government told by a machine that a certain allocation of police officers would save money and cut crime would find that advice hard to resist.

But sometimes things will go wrong. Algorithms might look and sound very objective, but they always start with a question framed by whoever is in charge, and as a result they tend to reproduce the biases of their creators. They are never neutral. For example, some police forces rely on data models to decide where to put police officers. Recorded crime tends to be concentrated in poor neighbourhoods, so the model sends more officers to those areas. More officers generally means more people in those neighbourhoods getting arrested, which feeds back into the model, creating a self-perpetuating loop of growing inequality and algorithm-driven injustice.
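To make that loop concrete, here is a deliberately crude sketch of the dynamic – the neighbourhoods, crime rates and counts are invented, and the model is far simpler than anything a real police force would use:

```python
import random

# Toy illustration of a predictive-policing feedback loop (hypothetical numbers).
# Neighbourhoods A and B have identical underlying crime, but A starts with
# slightly more *recorded* crime. Each day the patrol goes wherever the data
# says crime is worse, and crime is only recorded where the patrol actually is.
true_rate = {"A": 0.3, "B": 0.3}   # same real level of crime in both areas
recorded = {"A": 12, "B": 10}      # a small bias in the historical records

for day in range(1000):
    patrolled = max(recorded, key=recorded.get)   # send officers where the data points
    if random.random() < true_rate[patrolled]:    # crime is only observed where we look
        recorded[patrolled] += 1                  # ...and fed straight back into the data

print(recorded)  # A's recorded crime keeps climbing; B's never moves
```

An initial gap of two recorded incidents is enough: the model keeps choosing A, only A generates new records, and the data ends up ‘confirming’ the very decision it produced.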

It’s not going to be good enough to say the computer did it. In my book The People vs Tech I suggested that we’ll need new ways of making sure these algorithms are accountable. Our lawmakers – whether national or international – must create accountability officials who, like IRS or Ofsted inspectors, have the right to send in technicians with the requisite skills to examine algorithms, either as random spot-checks or in relation to a specific complaint. It may no longer be easy to look under the bonnet of modern algorithms, but careful examination and oversight are still possible. Now we just need people in government smart and technically skilled enough to do it.