I gave a keynote at PyCon UK recently – it was mostly about the book “Seeing Like A State” and what software developers can learn from it about our effect on the world.
I’ve been meaning to edit it up into a blog post, and totally failing to get around to it, so in lieu of that, here’s my almost entirely unedited script – it’s not that close to the version I actually got up on stage and said, because I saw 800 people looking at me and panicked and all the words went out of my head (apparently this was not at all obvious to people), but the general themes of the two are the same and neither is strictly better than the other – if you prefer text like a sensible person, read this post. If you prefer video, the talk is supposedly pretty good based on the number of people who have said nice things to me about it (I haven’t been able to bear to watch it yet).
The original slides are available here (warning: Don’t load on mobile data. They’re kinda huge). I’ve inserted a couple of the slide images into the post where the words don’t make sense without the accompanying image, but otherwise decided not to clutter the text with images (read: I was too lazy).
Hi, I’m David MacIver. I’m here to talk to you today about the ways in which we, as software developers, shape the world, whether we want to or not.
This is a talk about consequences. Most, maybe all, of you are good people. The Python community is great, but I’d be saying that anywhere. Most people are basically good, even though it doesn’t look that way sometimes. But unless you know what effect your actions have, all the good intentions in the world won’t help you, and good people can still make the world a worse place. I’m going to show you some of the ways that I think we’re currently doing that.
The tool I’m going to use to do this is cultural anthropology: The study of differences and similarities between different cultures and societies. I’m not a cultural anthropologist. I’ve never even taken a class on it, I’ve just read a couple of books. But I wish I’d read those books earlier, and I’d like to share with you some of the important lessons for software development that I’ve drawn from them.
In particular I’d like to talk to you about the work of James C. Scott, and his book “Seeing Like A State”. Seeing Like A State is about the failure modes of totalitarian regimes, and other attempts to order human societies, which are surprisingly similar to some of the failure modes of software projects. I do recommend reading the book. If you’re like me and not that used to social science writing, it’s a bit of a heavy read, but it’s worth doing. But for now, I’ll highlight what I think are the important points.
Binary Tree by Derrick Coetzee
Before I talk about totalitarian states, I’d like to talk about trees. If you’re a computer scientist, or have had an unfortunate developer job interview recently, a tree is probably something like this. It has branches and leaves, and not much else.
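If you want that view written down, a minimal sketch in Python might look something like this (purely illustrative – but this really is the whole world as far as the data structure is concerned):

    # The computer scientist's tree: branches and leaves, and nothing else.
    class Tree:
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left    # left branch, or None at a leaf
            self.right = right  # right branch, or None at a leaf

    # A complete "tree": a root with two leaves.
    root = Tree(1, Tree(2), Tree(3))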
If you’re anyone else, a tree is rather different. It’s a large living organism. It has leaves and branches, sure, but it also has a lot of other context and content. It provides shade, maybe fruit, it has a complex root system. It’s the centre of its own little ecosystem, providing shelter and food for birds, insects, and other animals. Compared to the computer scientist’s view of a tree it’s almost infinitely complicated.
But there’s another simplifying view of a tree we could have taken, which is that of the professional forester. A tree isn’t a complex living organism, it’s just potential wood. The context is no longer relevant, all we really care about is the numbers – it costs this much to produce this amount of this grade of wood and, ultimately, this amount of money when you sell the wood.
This is a very profitable view of a tree, but it runs into some difficulties. If you look at a forest, it’s complicated. You’ve got lots of different types of trees. Some of them are useful, some of them are not – not all wood is really saleable, some trees are new and still need time to grow, trees are not lined up with each other so you have to navigate around ones you didn’t want. As well as the difficulty of harvesting, this also creates difficulty in measuring – even counting the trees is hard because of this complexity, let alone more detailed accounting of when and what type of wood will be ready. So how can you possibly predict how much wood you’re going to harvest, and thus plan around what profit you’re going to make? Particularly a couple of hundred years ago, when wood was the basis of a huge proportion of the national economy, this was a big deal. We have a simple view of the outcomes we want, but the complex nature of reality fights back at our attempts to achieve that. So what are we going to do?
Well, we simplify the forest. If the difficulty in achieving our simple goals is that reality is too complicated, we make the reality simpler. As we cut down the forest, we replant it with easy-to-manage trees in easy-to-manage lines. We divide it into regions where all of the trees are of the same age. Now we have a relatively constant amount of wood per unit of area, we can simply log an entire region at once, and our profits become predictable and, most importantly, high.
James Scott talks about this sort of thing as “legibility”. The unmanaged forest is illegible – we literally cannot read it, because it has far more complexity than we can possibly hope to handle – while, in contrast, the managed forest is legible – we’ve reshaped its world to be expressible in a small number of variables – basically just the land area, and the number of regions we’ve divided it into. The illegible world is unmanageable, while the legible world is manageable, and we can control it by adjusting a small number of parameters.
In a technical sense, legibility lets us turn our control over reality into optimisation problems. We have some small number of variables, and an outcome we want to optimise for, so we simply reshape the world by finding the values of those variables that maximize that outcome – our profits. And this works great – we have our new simple refined world, and we maximize our profit. Everyone is happy.
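To make that concrete, here’s a toy sketch in Python. The model and every number in it are invented purely for illustration – the point is the shape of the thing, not real forestry economics:

    import math

    # The whole forest, reduced to two variables: how densely we plant,
    # and how many years we wait between harvests.
    def profit(density, rotation_years):
        wood_per_tree = 1 - math.exp(-rotation_years / 30)  # growth saturates
        survivors = density * math.exp(-density / 80)       # crowding kills trees
        revenue = 100 * wood_per_tree * survivors
        costs = 0.5 * rotation_years + 0.2 * density        # land and labour
        return revenue - costs

    # "Managing the forest" is now just a search over two numbers.
    best = max(
        ((profit(d, r), d, r) for d in range(1, 200) for r in range(1, 200)),
        key=lambda t: t[0],
    )
    print("profit %.1f at density %d, rotation %d years" % best)

Everything the profit function doesn’t mention is invisible to the optimisation.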
Oh, sure, there are all those other people who were using the forest who might not be entirely happy. The illegible natural forest contains fruit for gathering, brush to collect for firewood, animals for hunting, and a dozen other uses all of which are missing from our legible managed forest. Why? Well because those didn’t affect our profit. The natural behaviour of optimisation processes is to destroy everything in their path that isn’t deliberately preserved or directly required for their outcome. If the other use cases didn’t result in profit for us, they’re at best distractions or at worst impediments. Either way we get rid of them. But those only matter to the little people, so who cares? We’re doing great, and we’re making lots of money.
At least, for about eighty years, at which point all of the trees start dying. This really happened. These days we’re a bit better at forest management, and have figured out more of which complexity is necessary and which we can safely ignore, but in early scientific forestry, about 200 years ago in Germany, they learned the hard way that a lot of things they had thought weren’t important really were. There was an entire complex ecological cycle that they’d ignored, and they got away with it for about 80 years because they had a lot of high quality soil left over from the old forest that they could basically strip mine for a while. But the health of the forest deteriorated over time as the soil got worse, and eventually the trees were unhealthy enough that they started getting sick. And because all of the trees were the same, when one got sick it spread like wildfire to the others. They called it Waldsterben – forest death.
The problem that the German scientific foresters ran into is that complex, natural, systems are often robust in ways that simple, optimised systems are not. They’ve evolved over time, with lots of fiddly little details that have occurred locally to adapt to and patch over problems. Much of that illegibility turns out not to be accidental complexity, but instead the adaptation that was required to make the system work at all. That’s not to say all complexity is necessary, or that there isn’t a simpler system that also works, but if the complexity is there, chances are we can’t just remove it without replacing it with something else and assume the system will keep working, even if it might look like it does for a while.
This isn’t actually a talk about trees, but it is a talk about complexity, and about simplification. And it’s a talk about what happens when we apply this kind of simplification process to people. Because it turns out that people are even more complicated than trees, and we have a long history of trying to fix that, to take complex, messy systems of people and produce nice, simple, well behaved social orders that follow straightforward rules.
This is what James Scott calls Authoritarian High-Modernism – the desire to force people to fit into some rational vision of the world. Often this is done for entirely virtuous reasons – many authoritarian high-modernist projects are utopian in nature – we want everyone to be happy and well fed and fulfilled in their lives. Often they are less virtuous – totalitarian regimes love forcing people into their desired mould. But virtuous or not, they often fail in the same way that early scientific forestry did. Seeing Like A State has a bunch of good examples of this. I won’t go into them in detail, but here’s a few.
Portland Street, Southampton, England, by Gary Burt
An amusing example is buildings like this. Have you seen these? Do you know why there are these bricked up windows? Well, it’s because of window taxes. A while back, income tax was very unpopular. Depending on who you ask, maybe it still is, but it was even more so back then. But the government wanted to extract money from its citizens. What could they do? Well, they could tax where people live by size – rich people live in bigger buildings – but houses are often irregularly shaped, so measuring the size of the house is hard. There is, however, a nice, simple, convenient proxy for it – the number of windows. So this is where window taxes come from – you take the complex, messy reality of wealth, pick a simple proxy for it, and you end up taxing the number of windows. Of course what happens is that people brick up their windows to save on taxes. And then suffer health problems from lack of natural light and proper ventilation in their lives, which is less funny, but so it goes.
Another very classic example that also comes from taxation is the early history of the Cadastral, or land-use, map. We want to tax land use, so we need to know who owns the land. So we create these detailed land-use maps which say who owns what, and we tax them accordingly. This seems very straightforward, right? But in a traditional village this is nonsense. Most land isn’t owned by any single person – there are complex systems of shared usage rights. You might have commons on which anyone can graze their animals, but where certain fruit trees are owned, but everyone has the rights to use fallen fruit. It’s not that there aren’t notions of ownership per se, but they’re very fine grained and contextual, and they shift according to a complex mix of circumstance and need. The state doesn’t care. These complex shared ownerships are illegible, so we force people to conform instead to the legible idea of single people or families owning each piece of land. This is where a lot of modern notions of ownership come from, by the way – the state created them so it could collect more tax.
And of course we have the Soviet Union’s program of farm collectivization, which had the state pushing things in entirely the opposite direction. People were operating small family-owned farms, which were adapted to their local conditions and what grew well where they were. A lot of it was subsistence farming, particularly in lean times – when you had excess, you sold it. When you didn’t, you lived off the land. This was hard to manage if you’re a state that wants to appropriate large quantities of food to feed your army and decide who is deserving and who gets what. So they forcibly moved everyone to work on large, collective farms which grew what the state wanted, typically in large fields of monocultures that ignored the local conditions. From the perspective of producing enough food, this worked terribly. The large, collectivized farms produced less food, less reliably, than the more distributed, adapted, local farms. The result was famine which killed millions. But from the point of view of making the food supply legible, and allowing the state to control it, the system worked great, and the Soviets weren’t exactly shy about killing millions of people, so the collectivization program was largely considered a success by them, though it did eventually slow and stop before they converted every farm.
But there’s another, more modern, example of all of these patterns. We have met the authoritarians and they are us. Tech may not look much like a state, even ignoring its strongly libertarian bent, but it has many of the same properties and problems, and every tech company is engaged in much the same goal as these states were: Making the world legible in order to increase profit.
Every company does this to some degree, but software is intrinsically a force for legibility. A piece of software has some representation of the part of the world that it interacts with, boiling it down to the small number of variables that it needs to deal with. We don’t necessarily make people conform to that vision, but we don’t have to – as we saw with the windows, people will shape themselves in response to the incentives we give them, as long as we are able to reward compliance with our vision or punish deviance from it.
When you hear tech companies talk about disruption, legibility is at the heart of what we’re doing. We talk about efficiency – taking these slow, inefficient, legacy industries and replacing them with our new, sleek, streamlined versions based on software. But that efficiency comes mostly from legibility – it’s not that we’ve got some magic wand that makes everything better, it’s that we’ve reduced the world to the small subset of it that we think of as the important bits, and discarded the old, illegible, reality as unimportant.
And that legibility we impose often maps very badly to the actual complexity of the world. You only have to look at the endless stream of “falsehoods programmers believe” articles to get a sense of how much of the world’s complexity we’re ignoring. It’s not just programmers of course – if anything the rest of the company is typically worse – but we’re still pretty bad. We believe falsehoods about names, but also gender, addresses, time, and many more.
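As a deliberately naive sketch of the kind of model those articles are complaining about – every field and the function below encodes at least one of those falsehoods:

    from dataclasses import dataclass

    @dataclass
    class Person:
        first_name: str  # falsehood: everyone has exactly one given name
        last_name: str   # falsehood: everyone has a family name, and it comes last
        gender: bool     # falsehood: gender is binary and never changes
        timezone: str    # falsehood: a person lives in one fixed timezone

    def full_name(person):
        # Falsehood: names always assemble in this order.
        return f"{person.first_name} {person.last_name}"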
This probably still feels like it’s not a huge problem. Companies are still not states. We’re not forcing things on anyone, right? If you don’t use our software, nobody is going to kick down your door and make you. Much of the role of the state is to hold a monopoly on the legitimate use of physical force, and we don’t have access to that. We like to pretend this makes some sort of moral difference. We’re just giving people things that they want, not forcing them to obey us.
Unfortunately, that is a fundamental misunderstanding of the nature of power. Mickey Mouse, despite his history of complicity in US racism, has never held a gun to anyone’s head and forced them to do his bidding, outside of a cartoon anyway. Nevertheless he is almost single-handedly responsible for reshaping US copyright law, and by extension copyright law across most of the world. When Mickey Mouse is in danger of going out of copyright, US copyright law mysteriously extends the length of time after the creator’s death that works stay in copyright. We now live in a period of eternal copyright, largely on the strength of the fact that kids like Mickey Mouse.
This is what’s called Soft Power. Conventional ideas of power are derived from coercion – you make someone do what you want – while soft power is power that you derive instead from appeal – People want to do what you want. There are a variety of routes to soft power, but there’s one that has been particularly effective for colonising forces, the early state, and software companies. It goes like this.
First you make them want what you have, then you make them need it.
The trick is basically to ease people in – you give them a hook that makes your stuff appealing, and then once they’re used to it they can’t do without it. Either because it makes their life so much better, or because in the new shape of the world doing without it would make their life so much worse. These aren’t the same thing. There are some common patterns for this, but there are three approaches that have seen a lot of success that I’d like to highlight.
The first is that you create an addiction. You sell them alcohol, or you sell them heroin. The first one’s free – just a sampler, a gift of friendship. But hey, that was pretty good. Why not have just a little bit more… Modern tech companies are very good at this. There’s a whole other talk you could give about addictive behaviours in modern software design. But, for example, I bet a lot of you find yourselves compulsively checking Twitter. You might not want to – you might even want to quit it entirely – but the habit is there. I’m certainly in this boat. That’s an addictive behaviour right there, and perhaps it wasn’t deliberately created, but it sure looks like it was.
The second strategy is that you can sell them guns. Arms dealing is great for creating dependency! You get to create an arms race by selling to both sides – each side buys for fear that the other one will – and now they have to keep buying from you because you’re the only one who can supply them with bullets. Selling advertising and social media strategies to companies works a lot like this.
The third is that you can sell them sugar. It’s cheap and delicious! It’s also probably quite bad for you, and it certainly takes over your diet, crowding out other, more nutritious options. Look at companies who do predatory pricing, like Uber. It’s great – so much cheaper than existing taxis, and way more convenient than public transport, right? Pity they’re going to hike the prices way up when they’ve driven the competition into the ground and want to stop hemorrhaging money.
And we’re going to keep doing this, because this is the logic of the market. If people don’t want and need our product, they’re not going to use it, we’re not going to make money, and our company will fail and be replaced by one with no such qualms. The choice is not whether or not to exert soft power, it’s how and to what end.
I’m making this all sound very bleak, as if the things I’m talking about were uniformly bad. They’re not. Soft power is just influence, and it’s what happens every day as we interact with people. It’s an inevitable part of human life. Legibility is just an intrinsic part of how we come to understand and manipulate the world, and is at the core of most of the technological advancements of the last couple of centuries. Legibility is why we have only a small number of standardised weights and measures instead of a different notion of a pound or a foot for every village.
Without some sort of legible view of the world, nothing resembling modern civilization would be possible and, while modern civilization is not without its faults, on balance I’m much happier for it existing than not.
But civilizations fall as well as rise, and things that seemed like they were a great idea in the short term often end in forest death and famine. Sometimes it turns out that what we were disrupting was our life support system.
And on that cheerily apocalyptic note, I’d like to conclude with some free advice on how we can maybe try to do a bit better on that balancing act. It’s not going to single-handedly save the world, but it might make the little corners of it that we’re responsible for better.
My first piece of free advice is this: Richard Stallman was right. Proprietary software is a harbinger of the end times, and an enemy of human flourishing. … don’t worry, I don’t actually expect you to follow this one. Astute observers will notice that I’m actually running Windows on the computer I’m using to show these slides, so I’m certainly not going to demand that you go out and install Linux, excuse me, GNU/Linux, and commit to a world of 100% free software all the time. But I don’t think this point of view is wrong either. As long as the software we use is not under our control, we are being forced to conform to someone else’s idea of the legible world. If we want to empower users, we can only do that with software they can control. Unfortunately I don’t really know how to get there from here, but a good start would be to be better about funding open source.
In contrast, my second piece of advice is one that I really do want you all to follow. Do user research, listen to what people say, and inform your design decisions based on it. If you’re going to be forming a simplified model of the world, at least base it on what’s important to the people who are going to be using your software.
And finally, here’s the middle-ground advice that I’d really like you to think about. Stop relying on ads. As the saying goes, if your users aren’t paying for it, they’re not the customer, they’re the product. The product isn’t a tree, it’s planks. It’s not a person, it’s data. Ads and adtech are one of the most powerful forces for creating a legible society, because they are fundamentally reliant on turning a complex world of people and their interactions into simple lists of numbers, then optimising those numbers to make money. If we don’t want our own human-shaped version of forest death, we need to figure out what important complexity we’re destroying, and we need to stop doing that.
And that is all I have to say to you today. I won’t be taking questions, but I will be around for the rest of the conference if you want to come talk to me about any of this. Thank you very much.