Weapons of math destruction

When I recently asked a close friend and AI expert what I should read to understand AI and its impacts today, she suggested Cathy O’Neil’s Weapons of Math Destruction (2017). It’s an interesting choice, given its age - but a good one. When O’Neil wrote it in 2016, the stuff that would become AI was still called ‘big data’ and ‘algorithms’, and Cambridge Analytica hadn’t yet made the headlines. The book remains fresh and relevant when it comes to the science that underpins AI; on the social consequences, her words now seem prophetic. In my career, I work with people to define and solve problems with technology, including AI, and I’ve penned a few reflections and ‘watch-outs’ for myself and others.

The main thesis is that ‘data-driven decision-making’ can be epically destructive when implemented badly. O’Neil illustrates this with examples from fields as diverse as education, justice, finance, health, sports and political advertising. While the practice of bad statistics is not a new phenomenon, the scale and impact achievable today have created something qualitatively new. O’Neil identifies three attributes that combine to create these “WMDs”:

1) They cause harm: they impact lives and livelihoods by increasing costs, preventing access to certain products and services, or forcing unwanted behaviours.

2) They are opaque and unaccountable: they cannot be questioned or challenged by the people they impact or the authorities that could protect them.

3) They scale, impacting entire markets and populations.

Many WMDs get implemented to answer a real need: with rising costs, expensive labour and fierce competition, automation is an attractive proposition. Proponents claim, plausibly, that it leads to cheaper, faster, more consistent and even fairer decision-making. And it’s true that digitisation can – and does – do all these things, if done properly. But it can also go horribly wrong when processes are designed with lazy assumptions, rely on biased inputs, get used in the wrong context, or run without enough oversight. And then lives get ruined.

The most interesting thing about this, to me, is that the destructive power of these decisions is almost never caused by a mathematical error, but by failures of human judgment and critical thinking. True, there are errors of data-handling, as in the case of conflated identities: if you share a birthday and a surname with a criminal, you may find your credit score being impacted by her behaviour, and worse. But even then, the basic error is a bad assumption (people who share a birthday and have similar names are likely to be the same person) – and a process design that makes it hard to correct the error once it has been made. The error, and the harm, then snowball: a model for mortgages that associates poverty with higher risk will lead to higher charges, contributing to that same poverty. In the same way, training a prediction algorithm on biased data will perpetuate the bias. The underlying assumption – that the past is a good predictor of the future – becomes self-reinforcing. The system will work to ensure the future looks like the past, only more so, because you’ve filtered out the variables and randomness that used to allow for social mobility. The poor stay poor, inequality widens. The computer says no.
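To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python (my illustration, not an example from the book): a toy credit-risk estimate is refit each round on the outcomes it helped create. The group labels, default rates and approval threshold are all invented; the only point is that a group denied on the basis of biased historical data never gets to generate the new data that would correct the estimate.

```python
# Toy simulation of a self-reinforcing scoring loop.
# All numbers, group names and thresholds are invented for illustration.
import random

random.seed(42)

TRUE_DEFAULT_RATE = 0.10        # in reality, both groups are identical
APPROVAL_THRESHOLD = 0.12       # deny anyone whose estimated risk exceeds this
APPLICANTS_PER_GROUP = 10_000
ROUNDS = 5

# Slightly biased historical data: group "B" merely *looks* riskier on paper.
history = {"A": {"loans": 1000, "defaults": 100},   # 10% observed
           "B": {"loans": 1000, "defaults": 130}}   # 13% observed (biased sample)

for round_no in range(1, ROUNDS + 1):
    for group, record in history.items():
        estimated_risk = record["defaults"] / record["loans"]
        approved = estimated_risk <= APPROVAL_THRESHOLD
        print(f"round {round_no}, group {group}: "
              f"estimated risk {estimated_risk:.1%} -> "
              f"{'approved' if approved else 'denied'}")
        if approved:
            # Approved applicants generate fresh outcome data at the true rate.
            defaults = sum(random.random() < TRUE_DEFAULT_RATE
                           for _ in range(APPLICANTS_PER_GROUP))
            record["loans"] += APPLICANTS_PER_GROUP
            record["defaults"] += defaults
        # Denied applicants generate no data at all, so the old, biased
        # estimate is never corrected: the past dictates the future.
```

Run it and group A’s estimate drifts towards the true 10% rate, while group B’s stays frozen at the biased 13% and the group is denied in every round – the past keeps predicting the future because nothing else is allowed to happen.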

The problem, then, isn’t the maths, or the technology, but the people building it and selling it: when their motives, assumptions and even core philosophical beliefs get set in code, this impacts society. Again, this is not new, and it can’t be totally prevented: we shape our technologies, and thereafter they shape us.

So what is to be done? How do we build a future that doesn’t look like the bureaucratic authoritarian nightmare in Terry Gilliam’s Brazil? For me, it all comes back to:

0. HAVE CLARITY AND ALIGNMENT ON THE PURPOSE AND OUTCOMES of any system.
This is foundational. If you don’t have the ‘Why’ and the ‘So-what’ nailed, you don’t know what you’re doing. Ideally, this should also be aligned with the bigger question: What world do we want to create, and why? This step usually does not get enough discussion time, and when it is discussed, it’s at too granular a level, which means all the subsequent effort gets pointed at solving the wrong problem (so if you think your objective is to maximise a KPI, make sure you can answer the five whys).

Aside from this, in my opinion, companies and governments need to do three things at a structural organisational level, to reduce the harm potential of digitally-assisted (and AI-assisted) operations:

  1. DESIGN METRICS AND PROCESSES ALIGNED WITH THE PURPOSE
    When designing KPIs and metrics, and when setting up policies, organisations and teams, we must ask: Are we measuring what we think we’re measuring? What unforeseen consequences could there be at the individual level – or at the system level – if this succeeds? Can we prevent or compensate for those? How do we validate our hypotheses with proper testing and user research?

  2. REMAIN OPEN
    Bearing in mind that “Not everything that can be measured matters, and not everything that matters can be measured”, we must be open to the fact that our brilliant system is going to be flawed, somewhere. There should always be a side-channel or an appeals process that people can use, separately from the automated process. Feedback from users provides vital insight into where and how processes are failing. In situations of real harm potential (like political advertising), there may be a need for openness and transparency, and maybe some oversight and careful intervention by a regulator.

  3. PREPARE TO EVOLVE
    Design every process to be able to improve over time, informed by (but not dictated by) users’ evolving needs. Again, the purpose and intended outcomes of the process should be the guiding star here - the metrics are only useful if they are accurate, robust and reflective of reality. Otherwise, iterate.

Capitalism is a blind machine that makes more of itself. People are its only eyes, ears and conscience. As the world becomes supercharged by thinking-technologies (“AI”) there will be a continuing need to check our course, to maximise the good and minimise harm.