Last week I had the privilege of attending a week-long event in Berlin with the MIT Media Lab, as one of about 50 participants drawn from around the world. This event was a 5-day residential hackathon, and also an immersion in Berlin's vibrant arts and tech scene. It really highlighted for me the power of bringing diverse groups of people and ideas together to create extraordinary new things, which is pretty much the reason I became a designer in the first place.
I was part of the team working on ‘Technologies for communication with the Deaf’. This track - one of five in the workshop - was led by Harper Reed and Christine Sun Kim, which was, I gotta say, awesome in itself. The team was also chock-full of expertise covering design thinking, machine learning and computer science, cognitive science, sign language and Deaf culture, and the science and art of scent. Such diversity led to rich and deeply engaging discussions throughout the week, both over post-its and over beers.
One revelation for me was learning about and engaging with Deaf culture for the first time. I learned, for example, that Deaf people were systematically oppressed for the best part of a century, thanks in large part to campaigning by Alexander Graham Bell (who turns out to have been a eugenicist as well as an inventor). Throughout the week we were accompanied by three interpreters whose job was to translate between hearing and Deaf members of our team. When they weren't interpreting, it was fascinating to hear their own experiences and perspectives on Deaf culture. I was struck by how many of the themes and ideas you hear about in colonialism and class oppression are also relevant to discussions about disability.
Our project followed a classic 'design thinking' kind of process: problem-framing, selection/synthesis of a single brief (the 'how might we...' question), ideation, prototyping and presentation. We started talking through the points and possibilities of the project on Tuesday, and after at least twelve square meters' worth of post-it notes, more discussion and a bit of frantic coding and presentation-building, we had a working (!) app and a relatively polished presentation by 1pm on Friday.
The concept was to create a digital commons for sign languages and Deaf culture. My teammate Zoë and I presented the project to a 150-strong audience at the end of the week. I’ll cover the project in more depth in a future post - suffice to say that it's very exciting.
Any good hackathon includes a bit of inspiration up front to push participants slightly beyond their comfort zone. (It's hard to imagine an event in Berlin that could fail to do this.)
First up, we went to visit Tomás Saraceno and his team in a repurposed industrial complex in the south of Berlin. The studio, which creates installation art and sculpture, often brings together a range of scientific disciplines in its work. We spoke to biotremologists, arachnologists and aerospace engineers, as well as Tomás himself. I'm not able to post any pictures of what I saw there, but I can say that some fascinating cross-disciplinary work is happening.
After ramen by the river in Kreuzberg, we went to see Mark Verbos, a musician, engineer and entrepreneur who makes “analog synthesizer modules of the highest order”. With two deaf people in our group, it was fascinating to see/hear Mark’s explanations of how the physics of music affects the hearing experience (eg. the difference between pitch and tone) using a lot of visual and vibrational similes. The synths themselves were truly the work of a craftsman, and I enjoyed his reasoning for the placement of the dials, buttons, sliders and connectors that modulate the signal as it passes through the synth (the music flows from bottom left to top right).
Finally, after a refreshingly direct talk from none other than Mitch Altman, the 'Johnny Appleseed of hackerspaces', we rounded off the day with an informal visit to one of Berlin's - and indeed the world's - earliest established hackerspaces: C-Base. This place is practically sacred ground for the Maker movement. Founded in 1995, only six years after the wall came down, C-Base is one of the most important hackerspaces in the world (not just because the space looks like a cross between a Borg ship and the Millennium Falcon – though that is pretty distinctive). For the past two decades C-Base has been a focus and a catalyst for open-source thinking and technological development, with worldwide impact. The Chaos Computer Club is also closely connected to the space, and the German Pirate Party - whose one sitting MEP, Julia Reda, joined us for a talk on Friday evening! - was founded there in 2006. We ended the day chatting over beers and Club Mate from C-Base’s extremely reasonably priced bar, while the evening light faded over the river Spree.
This was an exceptional day by any standards. I was incredibly glad to have taken my notebook with me and made sketches and notes throughout - I can see myself going back to them often in the coming months. Other trips and talks organised throughout the week were also noteworthy - dinner at the famous Due Forni pizza restaurant, a panel discussion with Joi Ito, Jörg Dräger and Julia Reda MEP, an experimental dance performance - but it was astonishing how much we covered on this one day.
Alongside ‘the Deaf track’ we also had teams working on Music technology; Blockchain applications; Playful AI; and Virtual Reality storytelling. You could almost feel the passion and curiosity crackle in the air all week, as people from vastly different backgrounds got to know each other and created sparks of inspiration. This reminded me a little bit of the collegiate system from my undergraduate degree, where people studying different academic disciplines live together in little village-like communities. The main difference was the greater leaning towards technology and its social and human implications; and also the more ‘rarefied’ concentration of people who had a demonstrable passion for their topic. I made new friendships and renewed existing ones. I learned a ton about AI and blockchain. I also learned a bit about the week's sponsors, Lego and BCG Digital Ventures, which are both doing interesting and important stuff with creative and socially-aware application of technology.
The 'ideas-nexus': Reflections
Having heard about the MIT Media Lab time and again while at the RCA, I was intrigued to have some first-hand experience with the organisation. I can safely say that it was one of the best (and best organised) technology-related events I have ever attended. I left knowing much more about the Media Lab, what it is and how it works, and I am full of respect for what it does.
To draw similarities between the Media Lab and something like C-Base might seem absurd: the Media Lab is bigger, more academic- and commercially-minded, and also much better-funded (and marketed). Superficially the similarity ends at ‘technology and geeks’. But beyond this, I think there’s something important they share, which I have difficulty articulating. It has something to do with the power to convene and catalyse conversations across very different disciplines. They represent a kind of connective tissue in our global culture - a channel or a forum for ideas to come together in a discipline-agnostic and respectful yet dynamic way - in between the sciences, engineering, art and design, and everything else.
It’s not just about making new technology – it’s about taking responsibility for technology and using its potential to build a better world for everybody.
This is totally my jam.
The term ‘persuasive technology’ covers a broad set of technologies and techniques combining computer science with behavioural psychology. It’s particularly topical today because of the recent (unsurprising) revelations about Facebook and what it’s been doing with user data. But it’s something I’ve been thinking about for a while, because it’s interesting and has a lot of practical consequences for life in the 21st century.
It’s no surprise that interactive digital technology can be designed to change people’s behaviour. After all, lots of things can change our behaviour, from branding to road design. Moreover, this kind of ‘persuasion’ has moral consequences: a charismatic brand can mask environmental destruction; good road design can save lives.
‘Persuasive technology’ is new and different because of two things: the psychological level on which the persuasion works, and the automation of the technology. This isn't persuasion in the rhetorical sense, because it’s not about using language, logic or delivery to change someone’s mind: it’s about bypassing a user’s rational thought process to change their behaviour. This should really be called ‘automated nudges’. At this point it’s worth a small diversion into what a ‘nudge’ is…
In the 1960s, the psychologists Kahneman and Tversky started exploring the ‘predictably irrational’ ways that we humans behave. They reasoned that humans have two cognitive systems – System 1 and System 2 – which work in parallel. System 1 is fast, involuntary and intuitive, whereas System 2 is deliberate and rational. We need a balance of both in order to survive, and the interplay between them is what makes us human (one of the challenges I faced, moving into creative work, involved learning to balance the two systems, move between them, and to muzzle one or other at different times). Through this framework, and by exploring other facets of perception and decision-making, the researchers explained how well-known cognitive biases such as the anchoring effect, or risk aversion, can arise. Walter Mischel used a similar framework, using the terms ‘hot’ and ‘cold’ decision-making to explore willpower, starting with the famous marshmallow test.
Then, in the 80s and 90s, Richard Thaler used these ideas as the foundation for Nudge theory. Nudging is about using cognitive insights to help people make ‘better’ decisions. You don’t have to be gullible for a nudge to work – you just have to be human. Nudges have indeed been shown to be effective at helping people, especially in situations where there is a mismatch between long- and short-term interests, like saving for pensions or helping someone lose weight.
Nudges can also be used to exploit people against their own interests – as in the so-called ‘Dark Patterns’ of digital user experience design. Again, this isn’t a question of having poor judgment or being credulous: it's something that, for the most part, bypasses reason. We are all susceptible to this because our human brains have to use heuristics – shortcuts to simplify decisions – and for the most part, we don’t even realise we’re doing it. This means that if someone wants to exploit us by designing their product or website to bias users towards certain outcomes, they can. Moreover, if they are able to analyse some data on your past behaviour, the company might be able to predict what kind of nudges you’d be likely to respond to, and adapt the site instantly. We, the users, usually have no idea this is happening. (For more on psychological profiling see Apply Magic Sauce, by Cambridge University.)
Exploiting internal tensions
BJ Fogg, the Stanford academic, popularised the phrase ‘persuasive technology’ in the 1990s. His simple ‘Fogg Behaviour Model’ sets out three conditions that must hold for a user’s behaviour to be shunted towards a particular path:
- The user wants to do the thing,
- They have the ability to do it, and
- They have been prompted to do it (‘triggered’).
The design and timing of the trigger is especially important, since it works best if it speaks to the instinctive/involuntary/‘hot’ system.
If, for example, someone wants to eat more healthily (step 1), but doesn’t because it’s too much effort, they could make healthy food more convenient, for example by stocking up on vegetables (step 2), and litter their kitchen with ‘triggers’ to inspire healthy eating just at the moments when they’re thinking about food – perhaps some pictures of healthy food or fit people stuck to the fridge (step 3).
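The three conditions can be sketched as a toy simulation. This is my own hypothetical illustration of the model's logic (the `User` class, the multiplicative combination and the threshold are assumptions for the sake of the sketch, not Fogg's formulation):

```python
from dataclasses import dataclass

@dataclass
class User:
    """Hypothetical user state for illustrating the Fogg Behaviour Model."""
    motivation: float  # how much the user wants to do the thing (0 to 1)
    ability: float     # how easy the thing is for them (0 to 1)

def behaviour_occurs(user: User, trigger: bool, threshold: float = 0.5) -> bool:
    """Behaviour happens when motivation and ability are jointly high
    enough AND a trigger arrives at that moment."""
    if not trigger:
        # No prompt, no behaviour - however motivated and able the user is.
        return False
    return user.motivation * user.ability >= threshold

# Someone who wants to eat healthily, with vegetables already in the fridge:
keen_cook = User(motivation=0.9, ability=0.8)
print(behaviour_occurs(keen_cook, trigger=True))   # True
print(behaviour_occurs(keen_cook, trigger=False))  # False - steps 1 and 2 alone aren't enough
```

The sketch makes the point of the model concrete: remove any one of the three conditions and the behaviour doesn't happen, which is why the timing of the trigger matters so much.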
The simplicity of the Fogg model is part of its power. The ethical problems come into focus when we unpack it a bit, and ask who is changing whose behaviour, and why.
Looking at step 1, there are a lot of other things that we might ‘want’ to do on some basic level, but which we know we’d be better off if we didn’t. For example: if I have a pub lunch on a weekday, it might be nice to have a beer but I know it’ll make me sluggish in the afternoon, so I don’t. But a combination of factors might convince me otherwise – from peer pressure to nice weather. In this case, the ‘hot triggers’ will have got me to change my behaviour. This example is innocuous, but lots of things fall into this category: drinking, gambling, overeating – and addiction to social media.
Behaviour-change models have provided us with a set of tools, nothing more. But an internet funded by advertising seems to lead inexorably to business models based on behavioural exploitation. The consequences of this include addiction, social anxiety, poor mental health, and more. Grindr makes its users miserable (but they still use it). Is this just the price we must pay for a functioning internet? Zuckerberg’s comment that "there will always be a free version of Facebook" suggests he believes data exploitation is here to stay - at least for most people.
How could we design business models for digital tech that are aligned with individual and collective wellbeing?
These questions are likely to become more pressing as digital interactions are built into the physical fabric of our world – so-called ‘ubiquitous computing’ and ‘ambient intelligence’. The Facebooks and Googles of the world will be keen to track our behaviour throughout, to improve their targeting algorithms and throw more effective ‘hot triggers’ at us, wherever our attention may be directed at the time. (But tellingly, Facebook execs ‘don't get high on their own supply’.)
In a world of ‘smart’ nudges, do free will and personal responsibility still mean the same things? Probably not.
As BJ Fogg himself says: "you should design your life to minimise your reliance on willpower." This includes everything from turning off phone notifications, to choosing products and services that help you achieve what you want with a minimum of exploitation.
My position on nudging and persuasive technology is ambivalent - I see it as a designable layer of experience, another material for designers to work with. And just as with materials selection, we need to be aware of our ethical responsibilities. The philosopher of technology Peter-Paul Verbeek suggests that, as users, we should educate ourselves better about how we are influenced. But in practice the time and headspace this would require is a luxury that many people can’t afford.
I have less of a problem with the tools themselves than with the intention behind them. Digital space has become an exploitative 'wild west', like a city without zoning laws, and I agree with Tristan Harris that more should be done to protect us from being manipulated unfairly. This might include regulation to balance the market in favour of more 'humane' companies, for example by requiring better data protection and transparency (GDPR is a start…). But we can't rely on regulation alone for what is essentially a culture change. A ‘humane’ approach to digital technology requires a transformational rethink of business models, encompassing branding and product design. This is an exciting creative challenge, and a huge opportunity.
This is a topic close to my heart, as you may have noticed. My 2017 project, Treasure, took a user-centred design approach to identifying and serving long-term user interests over short-term exploitation, in the context of personal finance. In a future post, I may explore some of the organisations and projects I think are moving in this direction, and what a humane internet might look like. The Mozilla Foundation, the EFF, the Centre for Humane Technology, and the customer commons initiative are all doing some cool stuff in this space, and I look forward to participating in this conversation as an active part of my practice.
On a sailing boat, you feel exposed, vulnerable even, but also closely connected to the forces of nature. You feel the sea and the wind through the tension and vibration of the ropes, the pressure on the tiller and the movements of the deck. The boat acts as a vector for this connection, channelling streams of tactile information to you, and enabling you to act through it and negotiate your way across the water.
In the past, I’ve discussed how designed objects represent a kind of ‘interface’ between ourselves and the world. This interface, and the way it is designed, can determine our sense of connectedness with things outside ourselves. It can make environmental sensations feel close and immediate, or push them into irrelevance.
I've now completed two university degrees - the first in materials science and the second focused on design. In my undergrad, materials selection was touched on briefly, but the general impression was that materials selection can – ‘should’ – be reduced to a mathematical process. Having characterised all the available materials and quantified their various properties, you can just plot a chart, or feed the information into an algorithm, and find the material that has the right balance of mechanical, electrical, thermal (etc.) properties, and cost.
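That textbook version of the process can be sketched in a few lines: filter candidate materials by hard constraints, then rank the survivors by a performance index. This is a minimal Ashby-style sketch of my own; the material names are real but every property figure here is illustrative, not reference data:

```python
# Toy materials database: name -> (density kg/m3, stiffness GPa, cost per kg).
# All figures are rough illustrative values, not engineering data.
MATERIALS = {
    "steel":     (7800, 200, 0.5),
    "aluminium": (2700, 70, 1.8),
    "GFRP":      (1900, 25, 3.0),
    "oak":       (700, 12, 1.2),
}

def select(max_cost: float, min_stiffness: float) -> list[str]:
    """Keep materials meeting the constraints, ranked by stiffness per
    unit density - a common index for a light, stiff component."""
    candidates = {
        name: (density, stiffness, cost)
        for name, (density, stiffness, cost) in MATERIALS.items()
        if cost <= max_cost and stiffness >= min_stiffness
    }
    return sorted(candidates,
                  key=lambda n: candidates[n][1] / candidates[n][0],
                  reverse=True)

print(select(max_cost=2.0, min_stiffness=50))  # ['aluminium', 'steel']
```

Neat, quantitative, and entirely blind to how any of these materials feel in the hand - which is exactly the gap the rest of this post is about.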
One thing rarely mentioned in the science/engineering context is that our subjective experience of materials forms a fundamental part of human experience. A wooden ship will be profoundly different to sail compared to a modern fibre-glass yacht with aluminium mast, while a large steel cruise ship is designed to eliminate the sensations of the sea for its passengers.
As materials scientists keep creating new materials, designers have an ever-widening palette from which to craft experiences. Of course, each new material came into use for reasons that may not have included the experience: perhaps it let people do things they couldn’t before, or improved safety, or was cheaper. My point is simply that materials mediate experience - and we now have choices.
Since infiltrating the design world, it has become clear to me that the emphasis over here is much more on the subjective properties of materials. I agree with this view, to a point – aesthetics and subjective experiences clearly matter. When you interact with a product, your opinions of how it looks and feels all get rolled into your overall ‘product experience’. The way you feel when you use a product can affect how easy you find it to use, whether you’re likely to use it again, and so on – and this is all partly because of materials. When Apple wanted the original iPhone to be ‘seductive’, they weren’t just talking about the digital interface, but also the sleek mirror finish, the curved steel body, the expensive-seeming weight, and so on. The physicality and materiality of the object speaks the same language as Apple’s digital experience. (This sense of coherence is precisely what makes Apple products so pleasant to use.)
I think of materials as actors, performing different roles in an experience – which can be a product or environment. Like actors, a material can play a range of roles, but they still have a kind of underlying character or personality. Woods are usually characterised as warm, organic and pliable, and metals as cold, industrial and formal: these sensory associations derive from physical properties like mechanical elasticity, density, thermal conductance and colour, and so are remarkably consistent across cultures. I think these associations still hold when the material in question is structural rather than aesthetic, as with the ships mentioned above (the only difference being that we sense the material’s character with apparatus other than our eyes).
There are exceptions to the general classifications – the formality of ebony, the warmth of copper – but if anything, these prove the point that each material has a distinctive character. And just like actors, materials can be typecast (yet another glass-and-steel skyscraper) and mimicked (concrete apartment blocks with a veneer of brick to look traditional).
I find the old modernist notion of ‘truth to material’ fascinating – although of course today every rule exists to be broken, every philosophy subverted. Materials are an essential part of all physical and digital objects, and materials selection is a vast topic. Stacks of books have been written about it; there are several large companies specialising in it. I’m not proposing a grand unified theory: quite the opposite, I think materials selection is a practice, a modern craft.
Decisions about physical form and materiality can’t be considered in isolation from each other. Materials awareness and intelligent materials selection are critical to the success of any design – its popularity, longevity, and afterlife – and thus to humanity as a whole.