Weapons of math destruction

When I recently asked a close friend and AI expert what I should read to understand AI and its impacts today, she suggested Cathy O’Neil’s Weapons of Math Destruction (2017). It’s an interesting choice, given its age - but a good one. When O’Neil wrote it in 2016, the stuff that would become AI was still called ‘big data’ and ‘algorithms’, and Cambridge Analytica hadn’t yet made the headlines. The book remains fresh and relevant when it comes to the science that underpins AI; on the social consequences, her words now seem prophetic. In my career, I work with people to define and solve problems with technology, including AI, and I’ve penned a few reflections and ‘watch-outs’ for myself and others.

The main thesis is that ‘data-driven decision-making’ can be epically destructive when implemented badly. O’Neil illustrates this with examples from fields as diverse as education, justice, finance, health, sports and political advertising. While bad statistics is not a new phenomenon, the scale and impact achievable today have created something qualitatively new. O’Neil identifies three attributes that combine to create these “WMDs”:

1) They cause harm: they impact lives and livelihoods by increasing costs, preventing access to certain products and services, or forcing unwanted behaviours.

2) They are opaque and unaccountable: they cannot be questioned or challenged by the people they impact or the authorities that could protect them.

3) They scale, impacting entire markets and populations.

Many WMDs get implemented to answer a real need: with rising costs, expensive labour and fierce competition, automation is an attractive proposition. Proponents claim, plausibly, that it leads to cheaper, faster, more consistent and even fairer decision-making. And it’s true that digitisation can – and does – do all these things, if done properly. But it can also go horribly wrong when processes are designed with lazy assumptions, rely on biased inputs, get used in the wrong context, or run without enough oversight. And then lives get ruined.

The most interesting thing about this, to me, is that the destructive power of these decisions is almost never caused by a mathematical error, but by flawed human judgment and lapses in critical thinking. True, there are errors of data-handling, as in the conflation of identities: if you share a birthday and a surname with a criminal, you may find your credit score being impacted by her behaviour, and worse. But even then, the root error is a bad assumption (people who share a birthday and have similar names are likely to be the same person) – and a process design that makes the error hard to correct once it has been made. The error, and the harm, then snowballs: a mortgage model that associates poverty with higher risk will lead to higher charges, contributing to that same poverty. In the same way, training a prediction algorithm on biased data will perpetuate the bias. The underlying assumption – that the past is a good predictor of the future – becomes self-reinforcing. The system will work to ensure the future looks like the past, only more so, because you’ve filtered out the variables and randomness that used to allow for social mobility. The poor stay poor, inequality widens. The computer says no.
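
To make that feedback loop concrete, here’s a toy simulation in Python. Every number in it is invented for illustration: a model scores poorer applicants as riskier, charges them more, and thereby makes them poorer - which the next round of scoring duly ‘confirms’.

```python
# Toy feedback loop (all numbers invented): a model that prices risk
# from wealth ends up manufacturing the risk it predicts.
def simulate(rounds: int = 5, wealth: float = 100.0) -> None:
    for r in range(1, rounds + 1):
        predicted_risk = max(0.0, 1.0 - wealth / 200.0)  # poorer => scored riskier
        surcharge = 20.0 * predicted_risk                # riskier => charged more
        wealth -= surcharge                              # charged more => poorer
        print(f"round {r}: risk={predicted_risk:.2f} "
              f"surcharge={surcharge:.2f} wealth={wealth:.2f}")

simulate()  # risk rises and wealth falls on every round
```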

The problem, then, isn’t the maths, or the technology, but the people building it and selling it: when their motives, assumptions and even core philosophical beliefs get set in code, this impacts society. Again, this is not new, and it can’t be totally prevented: we shape our technologies, and thereafter they shape us.

So what is to be done? How do we build a future that doesn’t look like the bureaucratic authoritarian nightmare in Terry Gilliam’s Brazil? For me, it all comes back to:

0. HAVE CLARITY AND ALIGNMENT ON THE PURPOSE AND OUTCOMES of any system.
This is foundational. If you don’t have the ‘Why’ and the ‘So-what’ nailed, you don’t know what you’re doing. Ideally this should also be aligned with the important bigger question: What world do we want to create, and why? This step rarely gets enough discussion time, and when it does come up, it’s treated at too granular a level – which means all the subsequent effort gets pointed at solving the wrong problem. (So if you think your objective is to maximise a KPI, make sure you can answer the five whys.)

Aside from this, in my opinion, companies and governments need to do three things at a structural organisational level, to reduce the harm potential of digitally-assisted (and AI-assisted) operations:

  1. DESIGN METRICS AND PROCESSES ALIGNED WITH THE PURPOSE
    When designing KPIs and metrics, and when setting up policies, organisations and teams, we must ask: Are we measuring what we think we’re measuring? What unforeseen consequences could arise at the individual level – or at a system level – if this succeeds? Can we prevent or compensate for those? How do we validate our hypotheses with proper testing and user research?

  2. REMAIN OPEN
    Bearing in mind that “Not everything that can be measured matters, and not everything that matters can be measured”, we must accept that our brilliant system is going to be flawed, somewhere. There should always be a side-channel or an appeals process that people can use, separate from the automated process. Feedback from users provides vital insight into where and how processes are failing. In situations of real harm potential (like political advertising), there may be a need for openness and transparency, and perhaps oversight and careful intervention by a regulator.

  3. PREPARE TO EVOLVE
    Design every process to be able to improve over time, informed by (but not dictated by) users’ evolving needs. Again, the purpose and intended outcomes of the process should be the guiding star here - the metrics are only useful if they are accurate, robust and reflective of reality. Otherwise, iterate.

Capitalism is a blind machine that makes more of itself. People are its only eyes, ears and conscience. As the world becomes supercharged by thinking-technologies (“AI”) there will be a continuing need to check our course, to maximise the good and minimise harm.

Uncommon carriers: should social media be regulated?

Over the past year, conservatives in the US have been fighting an ‘anti-censorship battle’ with social media platforms, as a reaction to posts and accounts being taken down for expressing hardline conservative views that violate community standards. New laws in Florida and Texas have made headlines, and the argument has now reached the Supreme Court.

The legal challenge seems to hinge on a category definition that feels very quaint to this Millennial: “are social media platforms like Twitter and Facebook more like a newspaper or a telephone network?” Currently, social media enforcement of community standards (and their ability to take down posts and de-platform certain ex-Presidents) is analogous to the editorial function of a newspaper: the New York Times is free to express a viewpoint, and isn’t obliged to publish anything it doesn’t agree with. Verizon, on the other hand, is regulated as a ‘common carrier’ and is required to transmit a telephone call regardless of its political leaning.

The Republicans’ argument that large social media companies are common carriers will probably fail at the Supreme Court. But it’s not totally outlandish:

  • Like much ‘common carrier’ infrastructure, social media experiences network effects: the value of a social media platform increases with the number of connected users. This can lead to monopolistic ‘winner-takes-all’ situations, which it might be appropriate to regulate.

  • Unlike a newspaper, most of the content on social media is posted for free, by the audience itself.

At first glance, then, Twitter is simply a digital abstraction of a town square: what could be more neutral and noble than holding space for free exchange of ideas?

The fact is, of course, that while Twitter and Facebook have elements of both a pure network platform and a pure publisher, they are categorically different from either. The media industry spent a painful 15 years adapting to this reality, but lawmakers and regulators, it seems, are still in 2008.

One thing we are still only beginning to grapple with is the fact that social media platforms, as mediators of our relationships - and thus our very identities - can have real-world impacts at an individual and societal level. For me this comes into sharpest focus when you consider how the platforms are monetised, and how the technology combines with that revenue model to produce new emergent behaviours. Here is a very high-level summary:

  • Social platforms use an algorithm to prioritise content for users

The sheer amount of stuff posted on these platforms (even just by your friends) is generally more than you can ever read, and displaying posts in strict chronological order actually makes for a poor user experience. So, to make the platform something you actually want to interact with, the posts you see, and the order you see them in, are managed by an automated process. This algorithm learns over time by tracking your behaviour on the platform, and gets better at putting things in front of you that produce whatever result it’s been told to achieve. (A toy sketch of this kind of ranking logic follows this list.)

  • The company makes money from advertising

Ad revenue enables the platform to be free to users. With low barriers to adoption, you can get large user numbers quickly, and access those all-important network effects. As a user you’ll see a few ads and paid posts as your ‘price’ of entry, and some of them might actually be relevant to you. But here’s the thing: the algorithm will learn your behaviours and preferences, and prioritise posts you find ‘engaging’, to increase your time on the site, and thus ad revenue.

  • Outrage is good for business

As any Twitter user knows, taking the middle ground doesn’t get much engagement (unless you’re funny or famous). If you want likes and shares, you need to be opinionated and pithy - which can easily tip into ‘extreme and reductive’. On Facebook, groups are more siloed from one another, but the feed is still subject to the same addictive and polarising forces. The pursuit of advertising revenue on fast-moving platforms is a huge accelerant of political polarisation.

  • If you’re not paying, you are the product (and so is your demographic)

Once the algorithm has you pegged as a believer in flat-earth theory (or a Hillary sceptic), you can be targeted by ads with the potential to further manipulate your behaviour. We saw this in 2016, when targeted political advertising likely shifted the outcomes of the Brexit and Trump campaigns by a few percentage points. (There is also the much harder-to-quantify impact of bots and paid actors, which also played a role.)
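
To make the mechanics above concrete, here’s a minimal sketch of engagement-weighted ranking. The field names and weights are my own invention - real ranking systems use thousands of signals - but the shape of the logic is the point: nothing in it measures truth or civility, only engagement.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> int:
    # Invented weights: comments and shares signal stronger engagement
    # (and often stronger emotion) than a passive like.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Most 'engaging' first - chronology and accuracy are ignored entirely.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured take on local planning policy", likes=12, comments=1, shares=0),
    Post("You won't BELIEVE what they said!!", likes=40, comments=60, shares=25),
])
for post in feed:
    print(engagement_score(post), post.text)  # the outrage post wins, 345 to 15
```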

For the curious, I’d recommend The System by James Ball as a primer on the above.

So while advertising has enabled social media to grow fast, it has also created incentives to distort our interactions and relationships with others.

At the level of the individual, we see widespread social media addiction and radicalisation, leading to vicious culture wars and real-life tragedy. People now spend an average of two and a half hours a day on some form of social media (and tracking cookies enable targeted ads for as long as we’re online), and a growing share of people get their news and form their views from social media. Social media algorithms curate our sense of reality, and can distort it.

At the societal level, bad actors, bots and targeted political advertising threaten democracy itself, and thus national security. (Things are also going to start moving faster, and will get weirder, as both the targeting and the content of ads can now be heavily automated.)

We can’t put the genie back in the bottle, nor would we want to. These platforms are popular in part because they are valuable. They are a part of our society and something like them will continue to exist for a long time. Increasingly, they will incorporate other everyday functions like payments, and there is the potential for some to become so socially valuable that they start to look like essential citizen services.

Beyond the current political scuffles, then, there is a strong case for smart regulation of social media platforms as businesses. Is it enough to incentivise transparency? What metrics could show that social media is doing more good than harm to society?*

*ps. There’s a deeper question beyond that, which is still science fiction but worth considering. We know that an algorithm that curates our social and informational environment can influence us towards certain behaviours. Over time this could perhaps lead us to develop certain attributes or qualities, such as being more curious or caring. Such an algorithm would have a power somewhere between a politician, a parent and a god. In whose hands should this power reside?

Neurospicy

It’s only in the last few years that I’ve realised my brain works a little differently from other people’s; more recently still that I’ve had the language to explain how. For as long as I can remember, I’ve felt on a different wavelength to many other people. I’ve struggled with certain things (forgetfulness, selective attention, impatience, speaking out of turn) but assumed that basically everyone did, and that maybe I was just poorly socialised in my childhood. As I grew up, I evolved workarounds to maintain a façade of acceptable behaviour (I monitor myself in conversation to make sure I don’t interrupt or go off on tangents; in a pub, I position myself so I don’t get distracted by a TV; in a work meeting, I make notes and doodle to keep my brain engaged with what I’m listening to…). Some of these are so deeply ingrained that I don’t always realise I’m doing them. Some of them seem strange and idiosyncratic to others. All of them take some effort, and can falter when I get tired, distracted or overwhelmed.

A few years ago, when a friend mentioned his own ADHD diagnosis, I started to join some dots. I started learning about neurodiversity. I was referred for an assessment by my GP, and, some months later, was formally diagnosed with ADHD.

While it made a lot of sense, and was very validating, I see the diagnosis as a ‘new chapter’ rather than full closure. It has helped me reframe a lot of my idiosyncratic behaviours as coping mechanisms. Rather than compensating for a flaw, I’m adjusting for a difference. I can’t emphasise enough how profound a shift this is. There are three parts to this, happening in tandem:

1. Self-management: As implied above, I grew up holding my behaviour on a tight leash, which is common for people not diagnosed until adulthood. The pressure to ‘pass’ as neurotypical can lead to anxiety, and the extra effort can be exhausting. While it’s probably not desirable or even possible to drop the mask entirely – even the most ‘liberal’ work and social environments only really tolerate quite a narrow band of acceptable diversity – self-knowledge is instructive, and helps me conserve energy and stop relying on anxiety as a motivator.

2. Acceptance: I used to be very judgmental of any failures to act ‘normally’ – both for myself and others. If I ever slipped up and did something ADHD-ish, like losing my keys, or missing an appointment because I got wrapped up in something, I’d hate myself for it and spend ages agonising afterwards. This certainly still happens a bit, but I’m much more understanding and compassionate to myself. It also gives me confidence to perform on my own terms, and to stop looking to others for social and behavioural cues of what ‘normal’ should look like.

3. Growth: If I’m less motivated by a fear of exposing weakness, I can more easily recognise and build strengths. I have a clearer idea of what I need, and I can be more honest about when I need help. If I do need support, it’s much easier to advocate for myself and my needs when armed with more accurate language – and, if I need it, a piece of paper from a psychiatrist.

There are strong parallels here with LGBT self-acceptance. Masking and being in the closet are similarly constricting experiences, and joining a proud and visible community is a decisive move towards freedom and empowerment. Working out how to pursue a life of greater fun and meaning in an authentic way is not by any means a linear process, but it feels clearer now than it did four years ago!

Why you should read more fantasy

Last weekend, at the British Library’s exhibition Fantasy: Realms of Imagination, I was struck by the diversity of fantasy as a genre of storytelling and its longstanding importance within our culture. Fantasy (and its close cousin, science fiction) is having a heyday after decades of being regarded as ‘too popular to be serious’. There is much fantasy that is pure entertainment, to be sure, but few would now seriously dismiss the genre as childish or irrelevant.

There are many stories that hold a place in our culture because of the truth they communicate rather than because they are true. To quote Victor LaValle, fantasy is ‘a chance to talk about something in our real world behind the screen of something impossible’.

Escapism and entertainment have value, but for fantasy these things are usually just the start. All stories also teach, reflect and persuade. An audience does not consume a story passively; storytelling is a two-sided encounter. It’s the closest thing we have to mind control. A fantasy world goes further: it has us suspend our disbelief and empathise with characters in a world whose rules differ from our own. In this state, our minds are open to a range of possibilities and opinions to which they might otherwise be closed. Writers, of course, know this, and deploy satire and allegory with varying levels of openness and self-awareness: Gulliver’s Travels, The Wizard of Oz and His Dark Materials come to mind. Even when an author resists intentional allegory, their values and experiences are refracted through the work, as with Tolkien’s The Lord of the Rings. And even fantasy stories can spread wrong knowledge: how many children, watching Disney’s The Little Mermaid, grew up believing that an unfair contract cannot be broken once it is signed? (I recommend Malcolm Gladwell’s great podcast on this last point.)

Fantasy and science fiction also prompt us to consider alternative paths, as a challenge, an aspiration, or a warning. They help us explore what ‘good’ looks like in society, and suggest what might happen if we put our minds to one goal or another. Sci-fi and fantasy stories contribute to cultural and technological shifts: they can act as touchstones for communicating new ideas, inspire invention, make radical ideas commonplace, and shape what we find cool and valuable. In Star Trek, for example, the core idea of an empathetic society, driven by curiosity and collaboration rather than greed, remains relevant and inspiring, even if the earlier series now feel dated. Snow Crash, Neal Stephenson’s 1992 ‘metaverse’ thriller, was written as a satirical dystopia but is today reportedly viewed as aspirational by our Silicon Valley overlords. Meanwhile, The Handmaid’s Tale seems to have been interpreted as policy ambition by the US Republican party.

I see a strong correlation between creativity and reading fantasy (and I’m not alone), and the connection is certainly important for me personally. We live in a world of rapid change, one that requires us to be imaginative and open to possibility. Today my work involves helping organisations transform to meet the needs of the future, and the first step is always to agree (together) on what you think the future looks like. Often, to a great extent, we can decide what the future looks like; as Peter Drucker said, “the best way to predict the future is to design it”. A clear vision, once it exists, unlocks everything else. It’s a well-crafted fiction – a fantasy, if you will – that gets woven into reality, one thread at a time.

Balance the game, balance the books

I’ve become fascinated by game monetisation models. (OK, bear with me - it gets interesting.) It’s a weird and diverse world, encompassing UX, behaviour, storytelling, business design and commercial pragmatism. In digital we often talk about meeting user needs. But how you frame and design the process of asking your user for money can make or break a product, or an entire brand.

Back in the early days of the internet, if you were in the business of software, there were basically only four ways to make money from your users. Some software worked on a freemium, subscription or ad-funded basis, or some hybrid thereof - but games were almost exclusively sold as a simple one-off transaction: pay once for access to the full game, forever.

As the internet developed, and especially when smartphones entered the picture, games and gamers became more diverse. Today you can hop onto FIFA or League of Legends and play someone new whenever you like; you can build a character and interact with others in World of Warcraft; or you can play Candy Crush on your morning commute.

It stands to reason that a diverse gaming industry, nudging £250bn/yr, has a similarly diverse range of revenue models. From my (non-exhaustive, highly anecdotal) analysis, everything fits into one of two categories: either pay to play (PtP), in which a subscription or a one-off fee gives access to the game, or free to play (FtP), which has no paywall but allows ‘in-game payments’ for a variety of purposes. Sometimes these microtransactions involve a trade-off of time, attention or skill in return for the player’s cash. Sometimes they’re purely cosmetic and kudos-based. Sometimes it’s all but impossible to advance in the game without paying – a.k.a. ‘pay to win’. FtP games also rarely limit how much a player can spend, and in some ways that’s the point: some players end up spending far more money on FtP games than on even the most expensive PtP titles - sometimes in the tens of thousands.

Now, here’s the thing: I don’t think either of these models is inherently better or worse. PtP is simple and straightforward, but sometimes you just don’t play a game much after buying it, so you might feel you’ve wasted your money. FtP has a low bar for entry, so it’s great for new or occasional players, or, say, multiplayer games whose experience improves if there are a lot of people online at once. FtP games have entertained millions and made billions of dollars. Many fans are happy with the idea of microtransactions as an exchange for entertainment.

Developers deserve to be paid for their time and skill, of course. But some monetisation tactics are plainly predatory. A prompt might pop up at a critical moment: “Your hero died in that battle? Quick, here’s a power-up to revive them and win the fight! Thirty gems: ten seconds to buy!!” In behavioural-science parlance, tactics like this exploit moments of emotional arousal within the game, when a player is more likely to make a rash decision. Sunk-cost bias also applies, as the player has likely invested considerable time, and possibly money, to get to this point. The item you buy might also have a ‘loot box’ or gacha mechanic: a chance of an in-game benefit. To sidestep gambling regulation, you buy these only with in-game currency, which you can obtain either with cash or through gameplay, blurring the line between a game of pure chance and one of skill. In my view, any product that flirts with the line between ‘exploitative’ and ‘lucrative’ probably needs a rethink. Any monetisation mechanic that feels jarring in the context of the game is probably the wrong mechanic. Playtesting should reveal this - so it’s surprising that games such as Star Wars Battlefront 2 and Mass Effect 3 were released with such egregious monetisation features.
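
As a back-of-envelope illustration of how the currency layer obscures real cost (every price and probability here is invented), consider converting a loot-box drop rate back into money:

```python
# Invented numbers: what a 2% drop rate means in real money.
gem_pack_price = 9.99   # real money per pack of premium currency
gems_per_pack = 1200
pull_cost = 160          # gems per loot-box 'pull'
drop_rate = 0.02         # chance per pull of the item you actually want

expected_pulls = 1 / drop_rate              # 50 pulls on average
gems_needed = expected_pulls * pull_cost    # 8,000 gems
packs_needed = gems_needed / gems_per_pack  # ~6.7 packs
print(f"Expected real-money cost: ~£{packs_needed * gem_pack_price:.2f}")
# => Expected real-money cost: ~£66.60
```

A player sees ‘160 gems per pull’, not ‘about £67 on average’ - and, this being a random process, some players will pay far more than that.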

The worst problems seem to emerge in organisations that, somewhere along the line, lose the thread of empathy with players. If game design and revenue generation are treated as totally separate functions, that’s a bad sign. Language is also indicative: I’ve seen department goals and KPIs that objectify players - ‘how do we monetise this segment of non-paying players’ rather than ‘how do we create an experience more players are willing to pay for’. Games, after all, should be fun first and foremost.

No doubt the innovation in monetisation models will continue through 2024, and I wonder where it will take us, especially as game audiences continue to grow.

For what it’s worth, here’s my rough attempt at a set of design principles for ethical monetisation:

  • Honesty – First and foremost, a game should be candid with its players about what they get in exchange for their money, time and/or skill, so the transaction is clear and comprehensible. Set players’ expectations ahead of time: if success or advancement relies on paying, make that clear upfront - don’t hide it until the player is invested. A loot box (a paid-for game of chance) should be fun in itself, much as a slot machine can be, and the design of the game should frame it appropriately.

  • Fairness – The trade-off between money, time and skill should be reasonable. If the game is multiplayer, the dynamics between paying and non-paying players need to preserve a sense of fair play and balance.

  • Respect – Games should respect players’ money and time by providing value for both. The timing and UX pattern for a payment should not exploit a player’s psychology. Players should be treated as an audience to be entertained, not as a cash-cow to be milked.

There are ways to avoid or offset unfair and exploitative dynamics, and many free-to-play games do strike a successful balance, whether by letting players go in open-eyed, or by using pity systems and paid cosmetic upgrades that don’t affect gameplay. (I’m told Genshin Impact and the Honkai series, by miHoYo, are good examples.) There’s a place in the world for ‘free to play’ – though, traditionalist that I am, I much prefer the clean simplicity of a purchase or subscription, which sets up clear boundaries and expectations on both sides of a relationship.
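
For the curious, here’s what a pity system looks like in miniature. The numbers are conventions I’ve seen discussed in gacha communities rather than figures from any specific game; the principle is what matters: a hard cap on bad luck makes the worst-case cost bounded and knowable.

```python
import random

DROP_RATE = 0.006    # base chance per pull (invented)
PITY_THRESHOLD = 90  # a drop is guaranteed by the 90th pull since the last one

def pull(pulls_since_drop: int) -> tuple[bool, int]:
    """Perform one pull; return (got_item, updated pity counter)."""
    pulls_since_drop += 1
    if pulls_since_drop >= PITY_THRESHOLD or random.random() < DROP_RATE:
        return True, 0  # drop obtained: the pity counter resets
    return False, pulls_since_drop

counter, total_pulls, got_item = 0, 0, False
while not got_item:
    got_item, counter = pull(counter)
    total_pulls += 1
print(f"Item obtained after {total_pulls} pulls (never more than {PITY_THRESHOLD})")
```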

I want to be bored

I tend to be interested in lots of things (read: I’m easily distracted) and I think of ‘interesting’ as a positive word, implying novel, varied, creative and perhaps unexpected or challenging.

But there’s definitely a negative kind of ‘interesting’. I’m reminded of the apocryphal Chinese curse ‘may you live in interesting times’: interesting can mean drama, misdirection, pointless complexity, and change for the sake of change. Being simply interesting doesn’t imply that something is important, worthwhile or benign. For me, ‘interesting’ has become one of those utterances, like ‘engagement’, ‘productive’, and ‘hardworking families’, that raise an eyebrow of skepticism.

We are taught to fear boredom and its consequences - usually subtly, through inference - by our media, entertainment companies and, yes, even our employers. Boredom is pathologised (health websites list it as a condition linked to depression): Quick! Distract yourself with gossip, drama, other people’s problems, busywork, and grind. The devil makes work for idle hands! Our attention is a scarce resource, so don’t hog it for yourself.

What this rather flat view fails to account for is that boredom comes in many shades, some of which can feel negative in the moment, but all of which are instructive. We have, for example:

  • ‘Understimulated’ bored: when there isn’t enough to do, but social convention or happenstance prevents you from doing anything about it

  • ‘Confused and overwhelmed’ bored, which is familiar to anyone who had a sub-par science teacher

  • ‘Burned-out and numb’ bored: a cousin of ‘Gen-Z ennui’, this is caused by overstimulation and manifests as an impotence of spirit and a lack of strong will in any particular direction (think city-professional hive worker)

  • ‘Avoiding a task’ bored: the big force-field that appears just when I have something important to do that isn’t novel, challenging or urgent, and magically deflects me to another task (like writing a blog)

  • ‘Indifferent’ bored: this is actually something most public services should aspire to induce - from taking a bus, to having a vaccine, there are many areas where invisibility is a sign of excellence, and ‘interesting’ indicates a failure

  • ‘Peacefully empty’ bored: the emptiness that fills the imagination - this is what I think of as ‘good bored’, being appropriately stimulated yet unencumbered by pressures, anxieties or distractions. Today, companies sell it back to us and call it mindfulness. This mental state is why people fly thousands of miles to remote beaches to be bored in the right way.

The right kind of boredom is good for our health and is a necessary condition for much creativity. The big open secret is that this is all accessible to all of us, for free - we just have to resist the lure of ‘interesting’ from time to time. (Step two is to listen to our own minds, which is non-trivial and nonlinear, but one of the most fruitful things anyone can do.)

Here’s to embracing the countercultural and creative potential of boredom.

2023 and me (some honest reflections)

After the new year platitudes, now for some real-talk.

2023 was, as I have mentioned previously, mostly a good year. It was also a year of working very hard, mostly on a single, very intense, project. The team was fantastic - indeed, it was mostly the social environment that allowed me to sustain the pace and level of effort for so long. Despite many challenges, setbacks and switchbacks, which at times left us bewildered and frustrated, there was a sense that we were in it together. And we held it together, and we delivered. The project was a fantastic experience of the ‘Scrum’ method, in that we were a cross-functional team adapting rapidly to changing circumstances. It was all very Agile.

And yet an important element of Scrum is that the pace should be sustainable. Unfortunately, in our context, our pace was set by the hardest of hard deadlines, and we didn’t have the luxury of varying the scope. So while I was gaining all this great experience, I was getting tunnel vision. I felt ‘resilient’ in the narrow sense of a wind-up toy: if you stop it, it’ll immediately set off at top speed again…

In November, after the best part of a year of running, my role in that project concluded — so of course I went straight into another, even more intense one. And I mean straight in - not a day in between to take a breath or reflect on my achievements. I must admit, I was lured by the prestige of the new role and flattered to be asked. But there was more: after running flat-out for so long, it had become the only way I knew how to work. I’d gotten addicted to firefighting and urgency and crisis.

By the time I made it to Christmas, of course, I was exhausted and stressed. I felt like I’d misplaced my sense of purpose - that I was just pretending to want things. Fortunately, when I stepped away from my desk I had a festive script to follow: I cooked elaborate meals, went for wintery walks, read books by the fire, and ate and drank lots. It took me at least ten days away from work to properly calm down and relax, and only then did I start recharging.

So now I feel able to look back on 2023 and drag some lessons out of it. Professionally, for me, it was a successful year. The wider team on my project won a big internal award, and I personally got a ton of glowing feedback. On the other hand, over the year, I’d allowed sustained stress and fatigue to impact my mental wellbeing and my friendships.

I’m now back at work (albeit still not quite at full strength, thanks to a lingering cough). Reflecting on the above, some lessons I can extract for myself might be:

  • Take more regular days off, especially after I finish an intense project.

  • Find the middle ground between ‘off’ and ‘on’. Sometimes this involves stepping back even if an output is imperfect (so hard to do, but a useful skill for survival)

…and I’m sure I’ll add to the list over the coming weeks. For now, I’ll focus on growing and evolving my sense of self and purpose along with my skills and responsibilities. 2023 was a year of labour — productive, but effortful. For 2024, an adjective I’d like to aim for is ‘flourishing’: growth, but relaxed, free and perhaps more intuitive.

By Janus! It's 2024!

2023 felt like a tempestuous year for the world. For me personally, while it presented a fair few struggles and setbacks, I’m happy to say my experience was actually mostly good. It still feels good to have it in the rear-view mirror, though. This year I’ve made a point of taking time off over Christmas and New Year - I’ve especially enjoyed the downtime of ‘betwixtmas’ - and I’m looking forward to 2024 with a sense of intention and openness to change.

I only partly agree with those who say new year’s resolutions are pointless. The turn of the new year has long been an excuse and a catalyst for change. Those who say ‘if a change is worth making, you’d already be doing it’ miss the fact that the main barrier to much personal change is in the imagining, not the doing — and imagination is a sensitive animal.

Everyday life tends to be busy and focused on immediate objectives. For someone with a corporate job and a social media account, it takes some discipline to carve out time to step back from what’s urgent and consider what’s important. The FOMO is strong; the pressure to achieve and the fear of boredom have been programmed into us. So when I do take time off work, I’m usually hit with a wave of pent-up reflectiveness. Add to this the (non-coincidental) fact that the winter solstice, Christmas and New Year all come in close succession, with their themes of change and renewal, and you have a great recipe for reflection, diagnosis and planning.

I’ve also come round to the idea of celebrating the new year itself. Like many rituals, it serves a purpose on a deeper level than simply having a party - for starters, I’m pretty sure I start writing the date correctly sooner in years when I’ve marked the new year. (This year I celebrated Hogmanay with a silent disco on the streets of Edinburgh - a rather wonderful experience, which I recommend!)

So whether or not one wants to make or announce ‘resolutions’, the threshold of a new year feels like a more liminal time than others. The boundaries between alternative futures feel thin and the air contains possibility.

I wish you a very happy and prosperous 2024.

A target restaurant experience

Pizza Express is one of the UK’s best-established chain restaurants. I used to go a lot as a kid - more than one friend had their birthday party in a Pizza Express - but somewhere around 2010 it stopped feeling like a “dependable option” and became “tired and mediocre”. Such was the chain’s ubiquity that the brand picked up its share of unsavoury associations over the years. To me, at least, the brand became invisible - especially when there’s a Franco Manca or a Pizza Pilgrims nearby, both of which offer high-quality food at more competitive prices.

But last weekend, for the first time in maybe 12 years, I went to a Pizza Express, and I was genuinely blown away. Not by the quality of the food - it’s back to ‘solidly dependable’ - but by the digital experience. Here are some things I loved.

  • Arrival: whether you book or just show up, you get a good digital experience. Booking is integrated seamlessly with your in-person experience, but you can also ‘check in’ to your specific table after you’ve been seated. The app immediately offers you relevant discounts and menu specials.

  • Ordering: online/in-person equivalence. You can order in the app directly, or let a staff member take your order, and it’s instantly viewable in the app. You can add, amend or remove items, and apply discounts and freebies.

  • Brand continuity across touchpoints: the app lets you order a takeaway or eat in. Loyalty features are also seamlessly integrated, unobtrusive yet persuasive, and you even get loyalty points if you buy one of their supermarket pizzas and scan the QR code on the packaging.

  • Slick payment: the digital billing process has all the functionality you want. You can view and pay for your bill at any point, and choose how you split the bill. It also works with Google Pay and Apple Pay seamlessly. (You can also pay in-person, of course.)

  • Painless onboarding: the app setup screens have a nice visual look and feel, and the process doesn’t feel onerous or dull. It’s also only a 50MB download (compared with, for example, the Joe and the Juice app, a poorly functioning 300MB monster that requests permission for all the data on your phone).

Overall, this is a well-designed UI and a well-integrated experience for diners, front-of-house and kitchen staff. It’s important to note that you only get something this good when it’s been fully thought through, and when it sits on well-designed digital infrastructure with the right organisation to support it.

Knowing that the experience is this good means I’m definitely more likely to consider Pizza Express now for pre-theatre food, or a casual dinner with friends.

And I’m not alone, it seems — in the two years since this experience went live, 2 million people have joined the Pizza Express Club (i.e. they’ve created an account on the app) and 150,000 people have signed up through a friend’s referral.

It’ll be interesting to see how the market evolves as other brands catch up. Will we really want a different app for every restaurant we visit? The arms-race continues, but for the moment I think Pizza Express is showing us how it should be done.

So what?

Like many people with ADHD, I am motivated by interest. I like learning about and engaging with new and challenging topics. While this has many upsides in our modern knowledge economy, it has also meant I found it hard to present my findings and outcomes in a way that felt relevant, especially to senior audiences.

A little over a year ago, I reached a level where I’d be working directly with a client (responsibility win), but I was also doing the day-to-day work of delivering the project, which I found interesting and satisfying. I’d spend weeks exploring the twists and turns of a topic, talking to users and stakeholders, working with others to understand the minutiae and put it all together again. We’d do great work. But when the time came to play back recommendations to a client or to senior colleagues, certain key points wouldn’t land. My sense of what was ‘actually important’ had been skewed – partly because I hadn’t, by this point, spent much time really interacting with senior clients, but mostly because I was too close to the subject.

Around this time, one person’s advice really stuck (or rather, their exasperated question echoes around my head): What’s the “So what?”

Game-changer.

Take a simple example: let’s say you’re pitching for new work. Compare the following statements:

  • “We’ve done X, Y and Z (presumably impressive things).”

  • “We helped clients overcome challenges just like the ones you currently face, by doing X, Y and Z.”

A clear ‘so-what’ guides your audience to the conclusions you want them to draw, so it is, in a sense, manipulative - but there is no form of effective communication that isn’t. ‘So-what’ is the difference between death-by-PowerPoint and a compelling, actionable piece of insight. It turns ‘interesting’ into ‘important’. And for someone with limited working memory, ‘so-what’ is also easier to remember and more natural to apply than the pyramid principle / top-down thinking, or any of the more elaborate communication frameworks out there. I also avoid the three-part progression ‘what, so what, now what’, for a different reason: in my line of work, you might want to heavily imply the ‘now what’, but it’s generally a good idea to let your audience take at least one step for themselves!

What a screwdriver tells us about experience strategy

Nest Labs did many amazing things when they produced the first Nest thermostat, but one stands out for me as particularly brilliant.

For the uninitiated: the Nest Learning Thermostat is a very clever gadget. Suffice it to say that it was ingeniously engineered, with the ability to learn your behaviour and preferences and respond dynamically like no other thermostat. It was very pretty and intuitive. It was arguably the first truly smart home device. Of course, for a company founded by Tony Fadell, former Apple VP and ‘father of the iPod’, and his team of brilliant engineers and designers, all of this was par for the course.

For me, the stand-out example of Nest’s truly experience-led approach is this humble screwdriver.

Photo credit: notcot.com

Fadell and his team understood that a product’s success depends not just on the product itself, but on the whole experience. Nest was selling thermostats for $249, and the whole experience and brand association had to be just right. In his book ‘Build’, he outlines the customer journey, for Nest, as a rough set of percentages:

  • 10% of the customer experience is on the website, advertising, packaging and in-store display.

  • 10% is when you install the thing in your home.

  • 10% is looking at and touching the device.

  • 70% is the interaction with the system through the connected app.

To make the experience as positive and seamless as possible, they needed to think carefully about every step of that journey. The marketing told the story well. The packaging was beautiful. The device itself had a gorgeous, futuristic and slightly otherworldly feel, like I remember from the first iPhone. And the app was, of course, well designed.

Nest thought carefully about how users would install it, too. They designed an easy process with simple instructions anyone could follow. They prototyped it. It worked. Then they sent an early product to real users, to install by themselves, and - oh no - it took them an hour: way too long for a supposedly quick and easy upgrade. But then they saw that users actually spent over half of that hour simply looking for the right tools. Once they had those, the job took 20 minutes.

With that insight, framed in that way, including a screwdriver in the box with the Nest seems like a no-brainer. In Fadell’s own words, it turned a moment of frustration into a moment of delight.

I also love Fadell’s account of the unexpected consequences of including this tool, up and down the customer journey. After installation, the screwdriver would find its way into a top drawer somewhere, and customers would use it for odd jobs around the house. They’d see the logo. They’d smile. The screwdriver became a symbol of a high-quality, thoughtful brand experience. It was expensive to produce and include: it ate into margins, and lots of employees wanted to get rid of it. But it made its cost back, many times over, both through its impact on customer loyalty, and by saving money on phone support.

To me, this example illustrates the value of good user research, and of taking a degree of responsibility for curating the end-to-end experience. In other contexts, like, say, a government service, this kind of thinking might not produce a physical object like a screwdriver - depending on the user need, you might end up with a better-worded guidance page, or an email notification arriving at the right time. The point is that the product is only part of the experience, and it’s the experience that actually matters to the user.

Or, to put it another way: a product is only as valuable as the experience it’s a part of.

Hybrid by default, or by design?

Many of us have embraced the benefits of the hybrid life as part of the ‘new normal’. Of course some jobs can’t be done remotely, and some personality types do not thrive in a home office. But most office-based organisations can and should be hybrid. Giving employees flexibility in how and where they do their jobs is good for business, and makes the workplace more accessible for neurodiverse people (like me!).

Co-location has benefits - creativity and networking are oft-cited as ‘better in person’. Many companies are introducing mandatory office days as managers push to ‘return’ to ‘normal’. But they’re missing an opportunity. Office days are a deeply unimaginative default solution to a problem defined without nuance. They are unpopular and counter-productive, and don’t actually solve the problems they’re supposed to. Too often they simply create an oppressive hellscape of noisy colleagues on back-to-back video calls.

The old office isn’t coming back (thankfully), but it’s not enough to be ‘hybrid by default’ - we need to be ‘hybrid by design’. This means actively and intentionally finding cadences, rituals and tools that work. What’s needed is a critical eye on how to design the ‘best of both worlds’ - but organisations first need to understand what problem they’re really solving. Define the brief, to define what type of organisation you actually need, and then design it.

As ever, we must start with the users. Who are they (or rather, how might we define the user groups), what do they need, and what are their goals? For example:

  • A creative team might need a way to spontaneously drop into meetings and easily share craft skills and know-how

  • A policy team needs to take account of a broad range of expertise and perspectives as it draws up decisions (and do so efficiently)

  • A leader needs a way to monitor teams’ KPIs so they have an idea of progress, and understand how to support employees

  • At an enterprise level, there’s a requirement for solutions to be compatible with existing systems and ways of working

Note that the above are worded to be solution-agnostic. With needs framed this way, you can then develop a “hybrid by design” setup that coherently serves a whole organisation, not just its leaders. It also helps avoid the cardinal sin of ‘tech for tech’s sake’.

We’ve seen strides in the humanisation of remote work, with excellent tools like Slack and Mural/Miro, which allow for more visual, human and ‘natural’ ways of interacting. As with anything, there are ways of using these more or less well. Aside from these, here are a few things from my recent experience that I think organisations could use (or use better) to be hybrid-by-design:

  • Online ‘live’ documents - Word, PowerPoint and Excel - “Underused? Surely not” - Yes! Have you any idea how many meetings and emails could be avoided by better use of these tools? Asynchronous document review is powerful, but it requires active participation.

  • Team ‘culture’ meetings. “What? A meeting?!” - Culture needs a space to live, and if it’s not an office, it should be somewhere else. Leaders need to own this - being present, shaping the space and giving permission - but the members make the team. An organisation lives or dies by its culture. (If you lead a team and you’re rolling your eyes at this one, I have news for you.)

  • ‘Office simulators’ - a really interesting emerging category. Gather.town is a good example: it reduces the need for formal meetings and allows for more natural, water-cooler-style interaction. Proof that effective tech solutions can still be low-res, and that a genuinely useful ‘metaverse’ might be one that doesn’t simply try to look the same as reality.

What have you found effective in your hybrid working world?

Introducing 'Experience Points'

I decided to reboot this blog, after a five-year hiatus. While on some level I trust that my perspective and opinions have value, impostor syndrome is very much a thing, and creates a lot of internal resistance around writing publicly. But thought leadership is trendy, and the process of writing (and actually clicking ‘publish’) helps me develop skills, knowledge and my #personalbrand. This blog is, in short, something I feel I need to do.

So welcome to my newly-named ‘Experience Points’ blog. What’s in a name? I love a double - or in this case triple - meaning…

  • Experience strategy will be a main topic of these posts. I’m a user-centred designer at heart, but I now work at the level of enterprises and ecosystems, mapping journeys across all touchpoints, not just an individual product. The jargon of the day calls this ‘customer experience strategy’ - not to be confused with more granular UX, which is also important.

  • Points informed by my experience. There will be other things I write about besides experience strategy. I love the idea of keeping a blog as a virtual paper-trail of accumulated wisdom: we are, after all, the sum total of our experiences and relationships.

  • I’m a nerd. Real life doesn’t give us an XP counter, and we rarely have discrete ‘level-up’ moments. Calling this blog my XPs makes it feel like I’m on a bit of a quest. Yes, there’s a twinge of irony there: I have a strong aversion to the ‘let’s gamify everything’ trend. So let’s just say it’s part nerd culture in-joke, part playful ego-boost.

I’m leaving the older posts that were here prior to 2018 for now, because I think they’re still interesting and relevant. (I also wouldn’t want to think that my ‘creative technologist’ days are truly over!)

Whoever you may be, I hope you enjoy the content. I welcome conversation: feel free to comment or contact me.

Signal and Noise / MIT Media Lab Berlin Workshop: thoughts and reflections

The 'church of hacking' - S. Elisabeth, Mitte, Berlin

Last week I had the privilege of attending a week-long event in Berlin with the MIT Media Lab, as one of about 50 participants drawn from around the world. This event was a 5-day residential hackathon, and also an immersion in Berlin's vibrant arts and tech scene. It really highlighted for me the power of bringing diverse groups of people and ideas together to create extraordinary new things, which is pretty much the reason I became a designer in the first place.

 

The Project

I was part of the team working on ‘Technologies for communication with the Deaf’. This track - one of five in the workshop - was led by Harper Reed and Christine Sun Kim, which was, I gotta say, awesome in itself. The team was also chock-full of expertise covering design thinking, machine learning and computer science, cognitive science, sign language and Deaf culture, and the science and art of scent. Such diversity led to rich and deeply engaging discussions throughout the week, both over post-its and over beers.

One revelation for me was learning about and engaging with Deaf culture for the first time. I learned, for example, that Deaf people were systematically oppressed for the best part of a century, thanks in large part to campaigning by Alexander Graham Bell (who turns out to have been a eugenicist as well as an inventor). Throughout the week we were accompanied by three interpreters, whose job was to translate between the hearing and Deaf members of our team. When they weren’t interpreting, it was also interesting to hear their own experiences and perspectives on Deaf culture. I was struck by how many of the themes and ideas you hear about in colonialism and class oppression are also relevant to discussions about disability.

Our project followed a classic ‘design thinking’ process: problem-framing, selection and synthesis of a single brief (the ‘how might we…’ question), ideation, prototyping and presentation. We started talking through the points and possibilities of the project on Tuesday, and after at least twelve square metres’ worth of post-it notes, more discussion and a bit of frantic coding and presentation-building, we had a working (!) app and a relatively polished presentation by 1pm on Friday.

The concept was to create a digital commons for sign languages and Deaf culture. My teammate Zoë and I presented the project to a 150-strong audience at the end of the week. I’ll cover the project in more depth in a future post - suffice to say that it's very exciting.

 

The Inspiration

Any good hackathon includes a bit of inspiration up front to push participants slightly beyond their comfort zone. (It's hard to imagine an event in Berlin that could fail to do this.)

First up, we went to visit Tomás Saraceno and his team in a repurposed industrial complex in the south of Berlin. The studio, which creates installation art and sculpture, often brings together a range of scientific disciplines in its delivery. I’m not able to post any pictures of what I saw there. We spoke to biotremologists, arachnologists and aerospace engineers, as well as Tomás himself. I can say that there is some interesting cross-disciplinary stuff happening there.

After ramen by the river in Kreuzberg, we went to see Mark Verbos, a musician, engineer and entrepreneur who makes “analog synthesizer modules of the highest order”. With two Deaf people in our group, it was fascinating to see and hear Mark’s explanations of how the physics of music affects the hearing experience (e.g. the difference between pitch and tone), using a lot of visual and vibrational similes. The synths themselves were truly the work of a craftsman, and I enjoyed his reasoning for the placement of the dials, buttons, sliders and connectors that modulate the signal as it passes through the synth (the music flows from bottom left to top right).

Verbos Electronics

Finally, after a refreshingly direct talk from none other than Mitch Altman, the ‘Johnny Appleseed of hackerspaces’, we rounded off the day with an informal visit to one of Berlin’s - and indeed the world’s - earliest established hackerspaces: C-Base. This place is practically sacred ground for the Maker movement. Founded in 1995, only six years after the wall came down, C-Base is one of the most important hackerspaces in the world (and not just because the space looks like a cross between a Borg ship and the Millennium Falcon – though that is pretty distinctive). For the past two decades C-Base has been a focus and a catalyst for open-source thinking and technological development, with worldwide impact. The Chaos Computer Club is closely connected to the space, and the German Pirate Party was founded there in 2006 (its one sitting MEP, Julia Reda, joined us for a talk on Friday evening!). We ended the day chatting over beers and Club-Mate from C-Base’s extremely reasonably priced bar, while the evening light faded over the river Spree.

This was an exceptional day by any standards. I was incredibly glad to have taken my notebook with me and made sketches and notes throughout - I can see myself going back to them often in the coming months. Other trips and talks organised throughout the week were also noteworthy - dinner at the famous Due Forni pizza restaurant, a panel discussion with Joi Ito, Jörg Dräger and Julia Reda MEP, an experimental dance performance - but it was astonishing how much we covered on this one day.

 

The cohort

Alongside ‘the Deaf track’ we also had teams working on Music technology; Blockchain applications; Playful AI; and Virtual Reality storytelling. You could almost feel the passion and curiosity crackle in the air all week, as people from vastly different backgrounds got to know each other and created sparks of inspiration. This reminded me a little bit of the collegiate system from my undergraduate degree, where people studying different academic disciplines live together in little village-like communities. The main difference was the greater leaning towards technology and its social and human implications; and also the more ‘rarefied’ concentration of people who had a demonstrable passion for their topic. I made new friendships and renewed existing ones. I learned a ton about AI and blockchain. I also learned a bit about the week's sponsors, Lego and BCG Digital Ventures, which are both doing interesting and important stuff with creative and socially-aware application of technology.

 

The 'ideas-nexus': Reflections

Having heard about the MIT Media Lab time and again while at the RCA, I was intrigued to have some first-hand experience with the organisation. I can safely say that it was one of the best (and best organised) technology-related events I have ever attended. I left knowing much more about the Media Lab, what it is and how it works, and I am full of respect for what it does.

To draw similarities between the Media Lab and something like C-Base might seem absurd: the Media Lab is bigger, more academically and commercially minded, and much better funded (and marketed). Superficially, the similarity seems to end at ‘technology and geeks’. But beyond this, I think there’s something important they share, which I have difficulty articulating. It has something to do with the power to convene and catalyse conversations across very different disciplines. They represent a kind of connective tissue in our global culture - a channel or a forum for ideas to come together in a discipline-agnostic, respectful yet dynamic way - between the sciences, engineering, art and design, and everything else.


It’s not just about making new technology – it’s about taking responsibility for technology and using its potential to build a better world for everybody.

This is totally my jam.

 

Thanks #MLBerlin!

The problem with behaviour design

The term ‘persuasive technology’ covers a broad set of technologies and techniques combining computer science with behavioural psychology. It’s particularly topical today because of the recent (unsurprising) revelations about Facebook and what it’s been doing with user data. But it’s something I’ve been thinking about for a while, because it’s interesting and has a lot of practical consequences for life in the 21st century.

It’s no surprise that interactive digital technology can be designed to change people’s behaviour. After all, lots of things can change our behaviour, from an attention-grabbing ad campaign to a road design that makes drivers respond differently. But we sometimes forget that design has ethical consequences: a charismatic brand can mask environmental destruction; good road design can save lives.

‘Persuasive technology’ is in the same vein, but in the digital context it represents something new. This is because it’s possible to create digital interfaces that are ‘mass-customised’ to individual users, learning from their past behaviour to target them better. This isn't persuasive in the traditional, rhetorical sense, because it’s not about using language, logic or delivery to change someone’s mind: it’s about bypassing a user’s rational thought process to change their behaviour.

Nudge nudge

In the late 1960s, the psychologists Daniel Kahneman and Amos Tversky started exploring the ‘predictably irrational’ ways that we humans behave. Their work underpins the now-famous dual-system framework, in which two cognitive systems – System 1 and System 2 – work in parallel. System 1 is fast, involuntary and intuitive, whereas System 2 is deliberate and rational. We need a balance of both in order to survive, and the interplay between them is part of what makes us human (one of the challenges I faced, moving into creative work, involved learning to balance the two systems, move between them, and muzzle one or the other at different times). Through this framework, and by exploring other facets of perception and decision-making, the researchers explained how well-known cognitive biases such as the anchoring effect, or loss aversion, can arise. Walter Mischel used a similar framework, using the terms ‘hot’ and ‘cold’ decision-making to explore willpower, starting with the famous marshmallow test.

Then, in the 1980s and 90s, Richard Thaler built on these ideas, laying the foundations of Nudge theory (formalised with Cass Sunstein in their 2008 book Nudge). Nudging is about using cognitive insights to help people make ‘better’ decisions. You don’t have to be gullible for a nudge to work – you just have to be human. Nudges have indeed been shown to be effective at helping people, especially in situations where long- and short-term interests are mismatched, like saving for a pension or trying to lose weight.

Speeding nudge: the emoji on these signs work on our desire for social approval.


Nudges can also be used to exploit people against their own interests – as in the so-called ‘Dark Patterns’ of digital user experience design. Again, this isn’t a question of having poor judgment or being credulous: it’s something that, for the most part, bypasses reason. We are all susceptible because our human brains have to use heuristics – shortcuts that simplify decisions – and for the most part, we don’t even realise we’re doing it. This means that if someone wants to exploit us by designing their product or website to bias users towards certain outcomes, they can. Moreover, if a company can analyse data on your past behaviour, it can predict which kinds of nudges you’re likely to respond to, and adapt the site instantly – a sketch of how this might work follows below. We, the users, usually have no idea this is happening. (For more on psychological profiling, see Apply Magic Sauce by Cambridge University.)
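To make that concrete, here’s a minimal sketch of how such an adaptive system might work: a toy epsilon-greedy ‘bandit’ in Python that learns, per user, which nudge gets the most clicks. The nudge names, data structures and parameters are my own invention for illustration – this is not a description of any particular company’s system.

```python
import random
from collections import defaultdict

# Hypothetical nudge variants a site might test on each user.
NUDGES = ["scarcity_banner", "social_proof", "countdown_timer"]

# Per-user tallies of how often each nudge was shown and clicked.
shows = defaultdict(lambda: {n: 0 for n in NUDGES})
clicks = defaultdict(lambda: {n: 0 for n in NUDGES})

def pick_nudge(user_id, epsilon=0.1):
    """Epsilon-greedy: usually show this user the nudge with the best
    observed click rate, but occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(NUDGES)
    rates = {n: clicks[user_id][n] / shows[user_id][n] if shows[user_id][n] else 0.0
             for n in NUDGES}
    return max(rates, key=rates.get)

def record_outcome(user_id, nudge, clicked):
    """Update this user's tallies after each page view."""
    shows[user_id][nudge] += 1
    if clicked:
        clicks[user_id][nudge] += 1
```

Run over millions of page views, a loop like this quietly converges on whichever trigger each individual finds hardest to resist. The user never sees the tallies; the site never forgets them. That asymmetry is the point.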

 

Exploiting internal tensions

BJ Fogg, the Stanford academic, popularised the term ‘persuasive technology’ from the late 1990s. His simple ‘Fogg Behaviour Model’ identifies three conditions that must converge for a user’s behaviour to be shunted towards a particular path:

  1. The user wants to do the thing,

  2. They have the ability to do it, and

  3. They have been prompted to do it (‘triggered’).

The design and timing of the trigger is especially important, since it works best if it speaks to the instinctive/involuntary/‘hot’ system.

If, for example, someone wants to eat more healthily (condition 1), but doesn’t because it’s too much effort, they could make healthy food more convenient, for example by stocking up on vegetables (condition 2), and litter their kitchen with ‘triggers’ to inspire healthy eating just at the moments when they’re thinking about food – perhaps some pictures of healthy food or fit people stuck to the fridge (condition 3).
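Fogg summarises this convergence as B = MAT: a Behaviour happens when Motivation, Ability and a Trigger arrive together. As a toy illustration only – the 0-to-1 scores and the threshold below are my own invention, not Fogg’s – the fridge example might look like this:

```python
def behaviour_occurs(motivation, ability, prompted, threshold=0.5):
    """Toy reading of the Fogg model: a behaviour fires when a trigger
    arrives while motivation x ability clears some threshold.
    Scores are on an arbitrary 0-1 scale, purely for illustration."""
    return prompted and (motivation * ability) > threshold

# The fridge example: wants to eat well (motivation), vegetables to
# hand (ability), photo on the fridge door (trigger).
print(behaviour_occurs(motivation=0.8, ability=0.9, prompted=True))   # True
print(behaviour_occurs(motivation=0.8, ability=0.9, prompted=False))  # False: no trigger, no behaviour
```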

The simplicity of the Fogg model is part of its power. The ethical problems come into focus when we unpack it a bit, and ask who is changing whose behaviour, and why.

Looking at condition 1, there are a lot of things that we might ‘want’ to do on some basic level, but which we know we’d be better off not doing. For example: if I have a pub lunch on a weekday, it might be nice to have a beer, but I know it’ll make me sluggish in the afternoon, so I don’t. But a combination of factors might convince me otherwise – from peer pressure to nice weather. In this case, the ‘hot triggers’ will have got me to change my behaviour. This example is innocuous, but lots of things fall into this category: drinking, gambling, overeating – and addiction to social media.

Credit: Centre for Humane Technology (http://humanetech.com/app-ratings/)


Behaviour-change models have provided us with a set of tools, nothing more. But an internet funded by advertising seems to lead inexorably to business models based on behavioural exploitation. The consequences of this include addiction, social anxiety, poor mental health, and more. Grindr makes its users miserable (but they still use it). Is this just the price we must pay for a functioning internet? Zuckerberg’s comment that "there will always be a free version of Facebook" suggests he believes data exploitation is here to stay - at least for most people.

How could we design business models for digital tech that are aligned with individual and collective wellbeing?

These questions are likely to become more pressing as computing gets built into the physical fabric of our world and digital interactions increasingly manifest in physical space – so-called ‘ubiquitous computing’ and ‘ambient intelligence’. The Facebooks and Googles of the world will be keen to track our behaviour throughout, to improve their targeting algorithms and throw more effective ‘hot triggers’ at us, wherever our attention may be directed at the time. (But tellingly, Facebook execs ‘don't get high on their own supply’.)

What next?

In a world of ‘smart’ nudges, do free will and personal responsibility still mean the same things? Probably not. 

As BJ Fogg himself says: "you should design your life to minimise your reliance on willpower." This includes everything from turning off phone notifications, to choosing products and services that help you achieve what you want with a minimum of exploitation.

My position on nudging and persuasive technology is ambivalent - I see it as a designable layer of experience, another material for designers to work with. And just as with materials selection, we need to be aware of our ethical responsibilities. The design theorist Peter-Paul Verbeek suggests that, as users, we should educate ourselves about how we are influenced. But in practice, the time and headspace this would require is a luxury that many people can’t afford. I have less of a problem with the tools themselves than with the intentions behind them.

It’s interesting to consider how organisations and projects could move in a more humane direction, and what a humane internet might look like. The Mozilla Foundation, the EFF, the Centre for Humane Technology and the Customer Commons initiative are all pushing for positive change in different ways. My own 2017 project, Treasure, took an experimental approach, creating practical interventions that help users identify and serve their longer-term financial interests – and thereby resist short-term exploitation.

Digital space has become an exploitative 'wild west', like a city without zoning laws, and I agree with Tristan Harris of the CHT that more should be done to protect us from being manipulated unfairly. This might include regulation, for example with enforced transparency - but this space needs a culture change more than check-boxes. A ‘humane’ approach to digital technology requires a transformational rethink of business models, encompassing branding and product design. This is an exciting creative challenge, and a huge opportunity.

The craft of materials

Photo: Wikimedia Commons, Joost Ijmuiden


On a sailing boat, you feel exposed, vulnerable even, but also closely connected to the forces of nature. You feel the sea and the wind through the tension and vibration of the ropes, the pressure on the tiller and the movements of the deck. The boat acts as a vector for this connection, channelling streams of tactile information to you, and enabling you to act through it and negotiate your way across the water.

Ashby chart for materials selection (Credit: Granta Design)


In the past, I’ve discussed how designed objects represent a kind of ‘interface’ between ourselves and the world. This interface, and the way it is designed, can determine our sense of connectedness with things outside ourselves. It can make environmental sensations feel close and immediate, or push them into irrelevance.

I've now completed two university degrees - the first in materials science and the second focused on design. In my undergrad, materials selection was touched on only briefly, and the general impression given was that it can – indeed ‘should’ – be reduced to a mathematical process. Having characterised all the available materials and quantified their various properties, you can just plot a chart, or feed the information into an algorithm, and find the material with the right balance of mechanical, electrical, thermal (etc.) properties, and cost.
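And that textbook version really can be just a few lines of code. Here’s a sketch of the classic Ashby approach for one scenario – a light, stiff beam, where the performance index to maximise is E^(1/2)/ρ. The property values below are rough ballpark figures for illustration, not a materials database:

```python
# Rough illustrative values: Young's modulus E in GPa, density rho in kg/m^3.
materials = {
    "steel":     {"E": 200.0, "rho": 7850.0},
    "aluminium": {"E": 69.0,  "rho": 2700.0},
    "CFRP":      {"E": 100.0, "rho": 1550.0},
    "spruce":    {"E": 10.0,  "rho": 450.0},
}

def beam_index(props):
    """Ashby performance index for a light, stiff beam: E^(1/2) / rho.
    Only the relative ranking matters, so consistent units are enough."""
    return props["E"] ** 0.5 / props["rho"]

# Rank the candidates from best to worst for this one design scenario.
for name, props in sorted(materials.items(), key=lambda kv: beam_index(kv[1]), reverse=True):
    print(f"{name:10s} index = {beam_index(props):.5f}")
```

Wood and carbon fibre come out on top, which is roughly why gliders and racing yachts use them. But notice what the ranking can’t tell you: anything about how the material feels, sounds or ages.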

One thing rarely mentioned in the science/engineering context is that our subjective experience of materials forms a fundamental part of human experience. A wooden ship is profoundly different to sail from a modern fibreglass yacht with an aluminium mast, while a large steel cruise ship is designed to eliminate the sensations of the sea for its passengers.

Different styles of ocean transport evolved from material capabilities - and result in different experiences. (photo credit: Clipper Round the World Race; Cunard)


As materials scientists keep creating new materials, designers have an ever-widening palette from which to craft experiences. Of course, each new material came into use for reasons that may not have included the experience: perhaps it let people do things they couldn’t before, or improved safety, or was cheaper. My point is simply that materials mediate experience - and we now have choices.

Since my infiltration of the design world, it has become clear that the emphasis here is much more on these subjective properties of materials. I agree with this view, up to a point – aesthetics and subjective experiences clearly matter. When you interact with a product, your impressions of how it looks and feels all get rolled into your overall ‘product experience’. The way you feel when you use a product can affect how easy you find it to use, whether you’re likely to use it again, and so on – and this is all partly down to materials. When Apple wanted the original iPhone to be ‘seductive’, they weren’t just talking about the digital interface, but also the sleek mirror finish, the curved steel body, the expensive-seeming weight, and so on. The physicality and materiality of the object speaks the same language as Apple’s digital experience. (This sense of coherence is precisely what makes Apple products so pleasant to use.)

The original iPhone (photo: Apple)


I think of materials as actors, performing different roles in an experience – which can be a product or an environment. Like an actor, a material can play a range of roles, but it retains a kind of underlying character or personality. Woods are usually characterised as warm, organic and pliable, and metals as cold, industrial and formal: these sensory associations derive from physical properties like elasticity, density, thermal conductivity and colour, and so are remarkably consistent across cultures. I think these associations still hold when the material in question is structural rather than aesthetic, as with the ships mentioned above (the only difference being that we sense the material’s character with apparatus other than our eyes).

There are exceptions to the general classifications – the formality of ebony, the warmth of copper – but if anything, these prove the point that each material has a distinctive character. And just like actors, materials can be typecast (yet another glass-and-steel skyscraper) and mimicked (concrete apartment blocks with a veneer of brick to look traditional).

I find the old modernist notion of ‘truth to material’ fascinating – although of course today every rule exists to be broken, every philosophy subverted. Materials are an essential part of all physical and digital objects, and materials selection is a vast topic. Stacks of books have been written about it; there are several large companies specialising in it. I’m not proposing a grand unified theory: quite the opposite, I think materials selection is a practice, a modern craft.

Decisions about physical form and materiality can’t be considered in isolation from each other. Materials awareness and intelligent materials selection are critical to the success of any design – its popularity, longevity and afterlife – and thus to humanity as a whole.

606 Universal Shelving System by Dieter Rams. Beautiful, understated, thoughtful synthesis of material and form.


Note to self

photo: S Roots


I find writing helps me clarify my ideas and communicate my perspective. My thought process is sometimes scatty and chaotic (probably related to my mild dyspraxia) so writing is like sending my scattergun thoughts through a filter to see what actually makes sense. In addition, as I am relatively new to design and the creative industries, writing about design works like a kind of permission-giving exercise. Occasionally I may even post updates about my own design practice.

In this post I'm thinking about what drives me as a designer. The full answer to this would be too long for a blog, and too boring for a book. So here's a summary.

I design / am a designer because…

1.      It's fun

Design is about finding the place where human needs meet technological possibility. It’s kind of a wondrous thing to be able to create technology at all. Design is a constantly evolving challenge. It’s fun, it’s energising, and it’s addictive. I like working with complexity, and creating ways to simplify / interact with / cultivate it. People are endlessly fascinating, and the physical world of materials and processes for making things is just so incredible now.

2.      It's a good strategy for me

I’m future-proofing my career by specialising in creative and critical thinking, and in working with people. Design as a profession is vast and hard to define (in English, anyway), which gives me incredible freedom to work on things I find fascinating. As a freelancer I am a rarity – there aren’t many designers with a Master’s in materials science – so my perspective on design tends to be a little different from that of people trained as designers from day one. Finally, I am naturally quite a thoughtful and introverted person, so I need an occupation that pushes me to talk to people a lot and gets me out of my comfort zone!

3.      There's opportunity for impact

I often think I can do better than what currently exists. When I do, I love the satisfaction of making something that makes someone’s life more meaningful, easy, fun and/or awesome. As a designer I'm driven by my values, and while fair pay is essential, I don't believe in working just for money. Ultimately I want to live in a clean, zero-waste, high-tech, socially inclusive world, and I’m angry that the current system still causes so much devastating damage. One thing I like about design is its emphasis on function and usage, to the point that one measure of an effective design is how widely it gets used. (Artistic practice, by contrast, puts the emphasis more on conceptual or aesthetic results, and doesn’t rely on a ‘market’ as such. Having said that, I do like that there are big overlaps and fuzzy, permeable boundaries between these disciplines.)

4.      I feel inspired

I always wanted to be an inventor (even though it took me a while to get there). Growing up, I watched a lot of sci-fi (and still do). Visions of future worlds have inspired a lot of design, from Jules Verne to Le Corbusier, to Blade Runner and Black Mirror. But I think designers, innovators and entrepreneurs are the ones who really create and implement changes in the world and in how we live. Many of the people I have admired at one time or another were designers and innovators – Charles and Ray Eames, Dieter Rams, Gaudí, Buckminster Fuller – and Roman Mars played a crucial role in my early design education (and still does). But the person who first really put me onto design was Bill Moggridge. I saw a short film about him when I was 25 and it blew my mind. He spoke about the role of design and its importance in the world, in terms of deciding how things ought to be. It struck me as something profoundly interesting and important, and deeply, wonderfully, human.

An eventful summer...

It's been a busy time over at Chateau Roots these past couple of months. Alongside various weddings of friends and family, I've also done two weeks' jury service and seen a couple of interesting developments regarding projects I've worked on:

- POM has taken investment and the team has been invited to take up residency at InnovationRCA for the coming academic year

- Treasure has been invited to exhibit at the 2017 Global Grad Show in Dubai on 13-18 November

- Through Treasure, I was invited to write a blog post for the Money and Mental Health Policy Institute (pub. 23 August)

- I've also had an exciting proposal regarding Treasure to test it in a controlled setting. (Watch this space...) 

Looking forward to seeing how this all pans out.

Welcome to the new website

Hello world!

I graduate from the RCA in one week, so it seemed a good time to get my online presence in order. If you're looking for a designer / maker / engineer, or someone who can facilitate innovation workshops, I'd love to hear from you - please use the contact page to get in touch.