The problem with behaviour design

The term ‘persuasive technology’ covers a broad set of technologies and techniques combining computer science with behavioural psychology. It’s particularly topical today because of the recent (unsurprising) revelations about Facebook and what it’s been doing with user data. But it’s something I’ve been thinking about for a while, because it’s interesting and has a lot of practical consequences for life in the 21st century.

It’s no surprise that interactive digital technology can be designed to change people’s behaviour. After all, lots of things can change our behaviour, from an attention-grabbing ad campaign to a road design that makes drivers respond differently. But we sometimes forget that design has ethical consequences: a charismatic brand can mask environmental destruction; good road design can save lives.

‘Persuasive technology’ is in the same vein, but in the digital context it represents something new. This is because it’s possible to create digital interfaces that are ‘mass-customised’ to individual users, learning from their past behaviour to target them better. This isn't persuasive in the traditional, rhetorical sense, because it’s not about using language, logic or delivery to change someone’s mind: it’s about bypassing a user’s rational thought process to change their behaviour.
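To make this concrete, here is a minimal sketch (in Python, using entirely hypothetical names and data) of what ‘mass-customisation’ can look like mechanically: the interface logs how each user has responded to different prompt variants in the past, then serves whichever variant that user has proven most responsive to. No real product is being described here; it just illustrates the principle.

```python
# Hypothetical sketch: per-user interface customisation.
# Nothing here is from a real product; names and data are illustrative.
from collections import defaultdict

class VariantSelector:
    """Pick the prompt variant a given user has historically responded to most."""

    def __init__(self, variants):
        self.variants = variants
        # clicks[user][variant] and views[user][variant] hold past behaviour logs
        self.clicks = defaultdict(lambda: defaultdict(int))
        self.views = defaultdict(lambda: defaultdict(int))

    def record(self, user, variant, clicked):
        self.views[user][variant] += 1
        if clicked:
            self.clicks[user][variant] += 1

    def choose(self, user):
        # Rank variants by this user's past click-through rate;
        # unseen variants get a neutral prior of 0.5 so they still get shown.
        def ctr(variant):
            v = self.views[user][variant]
            return self.clicks[user][variant] / v if v else 0.5
        return max(self.variants, key=ctr)

selector = VariantSelector(["scarcity_banner", "social_proof", "plain_link"])
selector.record("user_42", "social_proof", clicked=True)
print(selector.choose("user_42"))  # -> "social_proof"
```

The point is not the algorithm’s sophistication but that the targeting happens silently, per user, without any appeal to their reasoning.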

Nudge nudge

In the late 1960s, the psychologists Daniel Kahneman and Amos Tversky began exploring the ‘predictably irrational’ ways that we humans behave. Their work gave rise to the now-familiar two-system framework: System 1 is fast, involuntary and intuitive, whereas System 2 is deliberate and rational. We need a balance of both in order to survive, and the interplay between them is part of what makes us human (one of the challenges I faced when moving into creative work was learning to balance the two systems, move between them, and muzzle one or the other at different times). Through this framework, and by exploring other facets of perception and decision-making, the researchers explained how well-known cognitive biases such as the anchoring effect or loss aversion can arise. Walter Mischel used a similar framework, with the terms ‘hot’ and ‘cold’ decision-making, to explore willpower, starting with the famous marshmallow test.

Then, from the 1980s onwards, Richard Thaler built on these ideas in behavioural economics, work that became the foundation of Nudge theory (set out with Cass Sunstein in the 2008 book Nudge). Nudging is about using cognitive insights to help people make ‘better’ decisions. You don’t have to be gullible for a nudge to work – you just have to be human. Nudges have indeed been shown to be effective at helping people, especially in situations where long- and short-term interests conflict, such as saving for a pension or trying to lose weight.

Speeding nudge: the emoji on these signs work on our desire for social approval.

Nudges can also be used to exploit people against their own interests – as in the so-called ‘Dark Patterns’ of digital user experience design. Again, this isn’t a question of having poor judgment or being credulous: it’s something that, for the most part, bypasses reason. We are all susceptible because our human brains have to use heuristics – shortcuts that simplify decisions – and for the most part, we don’t even realise we’re doing it. This means that if someone wants to exploit us by designing their product or website to bias users towards certain outcomes, they can. Moreover, if a company can analyse data on your past behaviour, it may be able to predict which kinds of nudge you’re likely to respond to, and adapt the site instantly. We, the users, usually have no idea this is happening. (For more on psychological profiling, see Apply Magic Sauce, by Cambridge University.)
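As a hedged illustration of that ‘instant adaptation’, the sketch below uses a simple epsilon-greedy bandit: for each user segment it mostly shows the nudge with the best observed response rate, occasionally trying others so it keeps learning. The segment names, nudge types and parameters are my own invented examples, not anything a specific company is known to run.

```python
# Hypothetical sketch of 'instant adaptation': an epsilon-greedy bandit that
# learns, per user segment, which nudge type tends to produce the desired action.
# Purely illustrative -- not a description of any real service.
import random
from collections import defaultdict

class NudgeBandit:
    def __init__(self, nudges, epsilon=0.1):
        self.nudges = nudges
        self.epsilon = epsilon              # how often to explore a random nudge
        self.shown = defaultdict(int)       # (segment, nudge) -> times shown
        self.converted = defaultdict(int)   # (segment, nudge) -> times acted on

    def pick(self, segment):
        if random.random() < self.epsilon:
            return random.choice(self.nudges)   # explore
        # exploit: nudge with the best observed conversion rate for this segment
        def rate(nudge):
            s = self.shown[(segment, nudge)]
            return self.converted[(segment, nudge)] / s if s else 0.0
        return max(self.nudges, key=rate)

    def feedback(self, segment, nudge, acted):
        self.shown[(segment, nudge)] += 1
        if acted:
            self.converted[(segment, nudge)] += 1

bandit = NudgeBandit(["countdown_timer", "friends_liked_this", "default_opt_in"])
nudge = bandit.pick(segment="late_night_mobile_user")
bandit.feedback("late_night_mobile_user", nudge, acted=True)
```

The feedback loop is the important part: every response we give makes the next trigger slightly better aimed.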

 

Exploiting internal tensions

BJ Fogg, the Stanford academic, has popularised the term ‘persuasive technology’ since the 1990s. His simple ‘Fogg Behaviour Model’ sets out three conditions that must coincide for a user’s behaviour to be shunted towards a particular path:

  1. The user wants to do the thing,

  2. They have the ability to do it, and

  3. They have been prompted to do it (‘triggered’).

The design and timing of the trigger is especially important, since it works best if it speaks to the instinctive/involuntary/‘hot’ system.

If, for example, someone wants to eat more healthily (step 1), but doesn’t because it’s too much effort, they could make healthy food more convenient, for example by stocking up on vegetables (step 2), and litter their kitchen with ‘triggers’ to inspire healthy eating just at the moments when they’re thinking about food – perhaps some pictures of healthy food or fit people stuck to the fridge (step 3).
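The model’s logic is simple enough to express in a few lines of code. The sketch below encodes the often-quoted summary B = MAP (Behaviour happens when Motivation, Ability and a Prompt converge); the numeric scores and threshold are my own illustrative assumptions rather than anything from Fogg’s work, and the example values echo the healthy-eating scenario above.

```python
# Minimal sketch of the Fogg model's logic (often summarised as B = MAP:
# Behaviour = Motivation x Ability x Prompt). The threshold and scores below
# are illustrative assumptions, not values from Fogg's work.
from dataclasses import dataclass

@dataclass
class Moment:
    motivation: float  # 0..1, how much the user wants the behaviour right now
    ability: float     # 0..1, how easy the behaviour is right now
    prompted: bool     # did a trigger/prompt arrive at this moment?

ACTION_THRESHOLD = 0.25  # assumed: behaviour fires when motivation*ability clears this

def behaviour_occurs(m: Moment) -> bool:
    # No prompt, no behaviour -- however motivated and able the user is.
    if not m.prompted:
        return False
    return m.motivation * m.ability >= ACTION_THRESHOLD

# The healthy-eating scenario above: wants to eat well (0.7), vegetables already
# in the fridge make it easy (0.8), and a photo on the fridge acts as the prompt.
print(behaviour_occurs(Moment(motivation=0.7, ability=0.8, prompted=True)))  # True
print(behaviour_occurs(Moment(motivation=0.7, ability=0.2, prompted=True)))  # False: too much effort
```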

The simplicity of the Fogg model is part of its power. The ethical problems come into focus when we unpack it a bit, and ask who is changing whose behaviour, and why.

Looking at step 1, there are a lot of other things that we might ‘want’ to do on some basic level, but which we know we’d be better off not doing. For example: if I have a pub lunch on a weekday, it might be nice to have a beer, but I know it’ll make me sluggish in the afternoon, so I don’t. But a combination of factors might convince me otherwise – from peer pressure to nice weather. In that case, the ‘hot triggers’ will have got me to change my behaviour. This example is innocuous, but lots of things fall into the same category: drinking, gambling, overeating – and addiction to social media.

Credit: Centre for Humane Technology (http://humanetech.com/app-ratings/)

Behaviour-change models have provided us with a set of tools, nothing more. But an internet funded by advertising seems to lead inexorably to business models based on behavioural exploitation. The consequences include addiction, social anxiety, poor mental health and more: in the app ratings above, Grindr ranks among the apps that leave users most unhappy – yet they keep using it. Is this just the price we must pay for a functioning internet? Zuckerberg’s comment that "there will always be a free version of Facebook" suggests he believes data exploitation is here to stay – at least for most people.

How could we design business models for digital tech that are aligned with individual and collective wellbeing?

These questions are likely to become more pressing as digital interactions are built into the physical fabric of our world – so-called ‘ubiquitous computing’ and ‘ambient intelligence’. The Facebooks and Googles of the world will be keen to track our behaviour throughout, to improve their targeting algorithms and throw more effective ‘hot triggers’ at us, wherever our attention happens to be directed. (Tellingly, Facebook execs ‘don’t get high on their own supply’.)

What next?

In a world of ‘smart’ nudges, do free will and personal responsibility still mean the same things? Probably not. 

As BJ Fogg himself says: "you should design your life to minimise your reliance on willpower." This includes everything from turning off phone notifications to choosing products and services that help you achieve what you want with a minimum of exploitation.

My position on nudging and persuasive technology is ambivalent: I see it as a designable layer of experience, another material for designers to work with. And just as with materials selection, we need to be aware of our ethical responsibilities. The philosopher of technology Peter-Paul Verbeek suggests that, as users, we should educate ourselves about how we are influenced. But in practice the time and headspace this would require is a luxury that many people can’t afford. I have less of a problem with the tools themselves than with the intentions behind them.

It’s interesting to consider how organisations and projects could move in a more humane direction, and what a humane internet might look like. The Mozilla Foundation, the EFF, the Centre for Humane Technology and the Customer Commons initiative are all pushing for positive change in different ways. My own 2017 project, Treasure, took an experimental approach to creating practical interventions that help users identify and serve their longer-term financial interests, and thereby resist short-term exploitation.

Digital space has become an exploitative ‘wild west’ – like a city without zoning laws – and I agree with Tristan Harris of the CHT that more should be done to protect us from being manipulated unfairly. This might include regulation, enforced transparency for example, but this space needs a culture change more than check-boxes. A ‘humane’ approach to digital technology requires a transformational rethink of business models, encompassing branding and product design. This is an exciting creative challenge, and a huge opportunity.