My friends at the Knight First Amendment Institute at Columbia have just published a new paper from me on the topic of digital public infrastructures. This is an idea I started talking about in an article for the Columbia Journalism Review late last year, and presented at a terrific conference called “The Tech Giants, Monopoly Power, and Public Discourse”.
Panel at Columbia where I presented the abstract of the paper, November 2019
The paper is not a quick read – it’s about 11,000 words – so I’ll offer a TL;DR here:
– Social media is often not very good for us as citizens in a democracy. That shouldn’t surprise us, as it wasn’t designed to be a space for civic discourse – it was designed to capture our attention and our personal data for use in targeting ads.
– If we wanted media that was good for democratic societies, we’d need to build tools expressly designed for those goals.
– Those tools probably won’t make money, and won’t challenge Facebook’s dominance. That’s okay. The current state of the online world is the result of market failure: there are tools and services a democratic society needs to function that markets won’t pay for, which means we need to decide to pay for those services with taxpayer dollars or voluntary contributions.
– There’s a lot to be learned from the history of public media – specifically the formation of the BBC in the 1920s and NPR in the 1970s – that should inform our thinking about digital public infrastructure. Specifically, they invite us to think about what we want new technologies to do for us as a society.
– While there are great models for digital public infrastructure in projects like Mozilla and Wikipedia – and arguably, open-source software as a whole – there are many key infrastructures we still need: social media platforms designed to encourage discussion between people who disagree with one another; ad networks that focus on context, not surveillance of users; and search engines designed for transparency and auditability. We also need a set of tools that help us study the civic, social and psychological effects of these new platforms, as well as existing ones.
– One way we could get there is by taxing surveillant advertising, both to discourage the business model and to raise money. The funds raised could go toward national projects focused on innovation around digital public infrastructures.
That’s the gist of it, though the full paper includes some great historical tidbits from the 1910s (a phenomenally cool moment in time), and Taylor Swift makes an appearance in the footnotes. So read the whole thing if that sounds like your idea of an enjoyable long read.
Much as I’ve spent the last several years thinking about civic media and the ways making and disseminating media can be a way of making social change – my new book, tentatively titled “Mistrust”, comes out this fall and provides an overview of that work – I’m hoping digital public infrastructure will be a major focus of my work for the next five to ten years. I’m teaching a new course this spring at MIT called “Fixing Social Media”, which is an attempt to get some of the smart folks in and around Cambridge thinking about what better models for social media might be. And I’m in the early stages of planning a conference to convene some of the remarkable people out there trying new models for building digital platforms.
Some early reactions to the paper have commented on its optimism. I feel oddly defensive about that word. In the community of folks who study the internet and society, optimism is often seen as a naïve and insufficiently critical stance. Indeed, some of the best work in our field is profoundly critical of existing systems. It’s my hope that this criticism informs and improves new work in the world of technology. I hope that anyone designing technologies for government services reads Virginia Eubanks, that anyone designing algorithmic decision-making systems reads Cathy O’Neil, and that anyone working on moderation reads Mary Gray and Siddharth Suri. But I also hope that people keep designing new systems, rather than accepting the bad, broken ones we’re stuck with today.
One of the great benefits of teaching at an engineering school for the past eight years has been the inexhaustible energy of young people convinced that their energy and expertise can change the world. As someone who teaches about the negative social and environmental consequences of technologies, I often feel like my work in complicating people’s hopes for technology is the process of crushing people’s dreams. But their imagination persists. And honestly, that sort of imagination – tempered by the critical lessons we’ve learned thus far about digital media – is what we need to work towards futures better than the dystopian, Black Mirror ones we too often seem to be living through these days.
So, inasmuch as imagining futures beyond our crappy present is optimism, I’m guilty as charged. But there’s nothing wrong with working to imagine and build better systems, so long as we understand that what we build won’t necessarily be better because it’s new, and almost certainly won’t work in all the ways we expect. Core to the argument of this paper is that we need to recognize that the ways the world works today are not inevitable, that the realities we face are the product of political and economic systems, and that those systems won’t change without a conscious effort to put something better in their place.
Looking forward to your thoughts, reactions, criticisms and imaginings, optimistic or otherwise.