
AIxDemocracy: What are the politics of AI?

Image from AI x Democracy conference at UMass Amherst

Some of my favorite UMass students have founded an organization called the Responsible Tech Coalition. It’s a group of graduate students who want to talk about “public interest technology”, the idea that technology can and should be designed with the public interest in mind. RTC organizes book clubs, brings speakers to campus and, yesterday, pulled off a terrific day-long conference called “AIxDemocracy”. I was honored to be the closing speaker, and took the occasion to try out some ideas I’m wrestling with around whether AI has an embedded politics within it, and what control we might have over these tendencies.

Sometimes I give talks because I know what I think and am working to convey it as well as I can – other times, I’m giving talks to figure out what I think. This is one of those second talks, and I am very much open to the ideas that a) I’m flat-out wrong about how I’m characterizing some tendencies of AI systems or b) that the ability to iterate and fix AI systems goes a very long way towards counterbalancing my concerns. So first, more or less what I said yesterday, followed by some reflections on feedback I got from friends during the conference and after my talk.


My coping strategy for digesting the waves of news about attacks on democracy by the Trump administration has been thinking about how I’m going to teach my fall class here at UMass. The class is called “Defending Democracy in a Digital World”, and I’d like to point out that I haven’t recently renamed it – democracy is inherently a fragile thing and perpetually needs defending.

My class is cross-listed in computer science, communication and public policy, but really it’s a history class. Specifically, it’s a history of how democracies have shaped and been shaped by waves of communication technologies. Democracy depends on a public sphere, the space in which people learn what’s going on in the world, debate what we as citizens and voters should do about it, and organize to take action. Over the course of millennia, we’ve moved the public sphere from a physical space – the Greek agora – to conceptual spaces of information. And it’s in this context that we need to think about democracy and AI.

When Athenians invented democracy, the communications technology was speech, and democratic debate included whoever made their way into the agora, a public space that was simultaneously holy ground, a commercial center, and a place to be entertained by performers or elevated by the lectures of philosophers. (In our class, we use Astra Taylor’s Democracy May Not Exist, but We’ll Miss It When It’s Gone to talk about Athenian democracy.)

When colonial Americans sought to reinvent democracy for a new nation, they were confronted with a challenge – the American colonies spanned a thousand miles from Boston to Charleston, and there was no physical space in which a citizenry could come together and collectively debate the matters of the day. They built an unprecedented infrastructure for democracy: the postal system. It was designed to be universal, serving the cities and the rural areas, and to be ideologically neutral, carrying newspapers and pamphlets from all political perspectives. The goal of the system was to make newspapers extremely cheap, allowing families in far-flung, lawless frontiers – you know, like Vermont – to know what was happening in the nation’s capital… and the scale of the system was massive. In 1830, 75% of US government jobs, outside of the military, were in the post office. The early US was basically a postal system with a small standing army attached to it. (Here we use Paul Starr’s The Creation of the Media and Winifred Gallagher’s How the Post Office Created America.)

That version of democracy was partisan and fractious – political parties descended from newspapers, not the other way around. It wasn’t until the next technological revolution – the penny press – that we got the idea of a neutral press. It wasn’t an ideological movement, but a commercial one. Newspaper moguls like Pulitzer and Hearst realized that they could make more money if they sold newspapers in the morning and the evening to both Democrats and Republicans. Cheap print made newspapers accessible to much broader audiences, and we saw the press serving immigrants with papers in their native languages, and content aimed at women and children, which brought remarkable women like Nellie Bly into the newsroom.

The penny press expanded the range of voices that could participate in democratic dialog. The broadcast age pushed in a different direction: the technical constraints of the medium meant that only a very few voices could be delivered to massive audiences. The intimacy of this medium – a stranger’s voice delivered into the sanctum of the home – changed Americans’ relationships to their leaders, particularly to FDR, who used his “fireside chats” to establish a personal, parasocial relationship with millions of citizens… giving him four terms in office. But the power of a single, personal voice was also part of the formula for propaganda: persuasive if misleading speech used to shape public opinion. As Americans began to understand the power of radio and film in the rise of fascism in Europe, we saw efforts to ensure “fairness” in broadcasting, a rough balance between the perspectives of the two political parties.

We are now roughly thirty years into the next act of technology and democracy, the age of the internet. It’s a vastly more open and participatory age than the broadcast age, for good and for ill. The idea that anyone can command an audience online has allowed social movements like Black Lives Matter and Me Too to gain currency and power… and it’s also allowed figures like Joe Rogan to become powerbrokers. The explosion of information has greatly democratized knowledge, but it’s also allowed people to select their own facts, leading to a very real concern that political divisions in the US are no longer about differing interpretations of a common reality, but between irreconcilable realities themselves.

Let’s posit for a moment that the next age is unfolding, the age of AI. What might we expect a public sphere transformed by AI to mean for democracy?

I’m going to constrain that question by embracing some language proposed by Arvind Narayanan and Sayash Kapoor at Princeton, the authors of an excellent book called AI Snake Oil. Their book is not nearly as hostile towards AI as the title might suggest – it’s helpful in understanding why some areas of AI, like image generation, are developing so quickly, and others are making little if any progress, like prediction of uncommon events. Arvind and Sayash released a paper last week called “AI as Normal Technology”, which is simultaneously a description of how AI is now, a prediction of how it will evolve in the near future and a proposal for how to regulate and live with it.

Their core idea is that while AI may be important and transformative – they offer comparisons to electricity and the Internet as similarly transformative general-purpose technologies – it’s not magic. They dismiss both the scenario in which artificial general intelligence makes most human jobs obsolete and necessitates universal basic income, and the scenario in which superintelligent AIs unleash killer robots to exterminate the planet’s population, as unlikely and as worthy of less consideration than a scenario where AI is important, but ultimately just another technology.

What does the future of AI and democracy look like if you take scenarios that are fun to think about, but unlikely to happen, off the table? It might look a little like an experience I had last Thursday – I was moderating a panel at the Museum of Science called “Democracy is a Disability Issue”. My panelists included Kim Charlson, the librarian at the Perkins School for the Blind, who is herself blind, and the CIO of the city of Boston, who has the challenge of designing information systems for the city that are accessible to residents with a wide range of disabilities. I expected to hear about 3D printing and braille everywhere, but we mostly talked about ordinary AI: Santi Garces, the CIO, uses generative AI to summarize the formal, legalistic minutes from City Council meetings into 20-word descriptions, which are designed to be easily delivered by screen readers. These summaries aren’t just good for visually impaired people – they’re vastly more popular as a way of understanding what the council is doing, even for sighted readers. When I asked him about the challenge of making data visualizations into something blind citizens can experience, he explained that his team is now experimenting with chatbots that can help users ask questions about government data sets – and predicted that these will be wildly popular outside the disability community as well. And as we were having this conversation, it was transcribed nearly flawlessly by Zoom’s AI… which could also be redirected to a handheld braille device that Kim the librarian could read. That’s ordinary AI, and it’s already pretty damned impressive when we think about the simple but profound problem of ensuring that democracies include the voices of all citizens.

I think the technologies that Eric Gordon has been talking about also fall within the realm of AI as normal technology. One of the hardest problems of democracy since its inception is the problem of listening at scale. From listening to those voices in the agora, to thousands of citizens crashing the Congressional phone system, ensuring that every voice in a democracy is heard has been an unsolved problem. Many of the systems we associate with democracies – voting, polling, petitions, the structure of representation itself – are technologies designed to enable listening at scale. (NB: Eric spoke at the conference, and is finishing a new book for MIT Press on AI and civic listening. I had the honor of reading the proposal and offering some thoughts on it – it’s one of the books I’ve been most looking forward to seeing in print.)

AI promises that we might listen to everyone: the callers to congressional offices can have their opinions summarized into briefings for their representatives. We can listen to people speaking on social media and translate those disparate voices into a complex tapestry of frustrations and hopes. We can seek out the solutions people are proposing and bring the best of them to leaders who might implement them.

But there’s at least as much reason for caution. If AI is good at digesting speech, it’s at least as good at producing it. We are already drowning in a sea of voices, and we are still figuring out whether we should dismiss the voices of machines as inauthentic speech, or accept the potential of these tools to allow those marginalized by language, disability or education to participate fully in our debates as citizens.

Assuming that we’re dealing with AI as an ordinary, normal technology like the ones that have previously transformed the public square, should we be optimists or pessimists about what AI will do to democracy? I am nervous, because I am starting to believe that AIs are inherently conservative technologies.

(And I’ve gotten a LOT of pushback on that term. To be clear, I mean it not in a Democrats/Republicans way, but in the way that conservatives warn that we lose essential aspects of our culture and character if we move too far from our own history. Unfortunately, our history often includes things we are, in retrospect, glad to have moved beyond. It’s that attachment to the past, even when it’s problematic, that I am trying to evoke with the term “conservative”.)

AIs extrapolate from training data – this process is the same whether they are generating new sentences from millions of texts scraped from the internet, or predicting whether an individual granted bail will reoffend if released. In both cases, systems are likely to inherit the biases of the data they are trained upon. The notorious COMPAS system, as used in Florida’s Broward County, flagged proportionately more Black defendants than white defendants as likely to be rearrested, in large part because the justice system there was more likely to arrest Black people than white people – a system trained on racial injustice replicated those biases in code.
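To make that mechanism concrete, here is a toy sketch (entirely hypothetical, and nothing like the actual COMPAS model) of how a biased label propagates: if the training label is “re-arrested” rather than “re-offended”, and one group is policed more heavily, even the simplest statistical model will learn to score that group as higher risk.

```python
# Toy simulation: two groups with identical true reoffense rates, but one
# group's reoffenses are far more likely to result in an arrest. A model fit
# on "re-arrest" labels learns the enforcement gap, not actual behavior.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)                 # two equally sized groups
reoffend = rng.random(n) < 0.30               # identical true reoffense rate
p_arrest = np.where(group == 1, 0.80, 0.40)   # hypothetical enforcement gap
rearrested = reoffend & (rng.random(n) < p_arrest)

# The simplest possible "model": the observed re-arrest rate per group.
# Any statistical model trained on these labels learns the same direction.
for g in (0, 1):
    print(f"group {g}: learned risk score = {rearrested[group == g].mean():.2f}")
# Group 1 comes out roughly twice as "risky", despite identical behavior.
```

The point isn’t the arithmetic; it’s that nothing in this pipeline knows the label itself is a product of unequal policing.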

We’re going through a period where we’re starting to hear more voices of people of color, of queer people, indigenous people, people from the Global South as they gain the tools, skills and time to make their voices heard online. But we’re training our AIs on the documents that have been put online so far… including the pirated books and scientific papers in LibGen that Facebook has gulped up to feed their “open” models. The corpus these AIs are trained on is disproportionately filled with documents written by people like me – aging white dudes who post too much content on the internet. It’s likely that biases within the text we write get reflected in the text that LLMs generate. You may have heard the em dash – commonly used in academic literature – being called “the ChatGPT dash”. Basically, a literary convention that was going out of style re-emerges because it’s baked into the language model.

You probably shouldn’t be too worried about the em dash. But the situation gets a bit more concerning when you consider pronouns. In an effort to include non-binary people, “they/them/their” is becoming a more common generic pronoun than “he or she”, much like “he or she” replaced “he/him/his” as a generic pronoun for a person of unknown gender. But singular “they” is only just making it into the corpus – as we generate text, does our language get pulled back to “he or she”? What other linguistic regressions are we going to find if LLMs take our language back to the authorship of the texts they’ve been trained on? What other biases creep into the texts we’re authoring, the images we’re generating, the decisions we are making? (In the talk, I used some of Mark Graham’s visualizations of contributions to Wikipedia, in which both contributors and content from the Global South are not well represented.)

Perhaps more worrisome is the concentration of power that contemporary AI systems seem to be bringing about. The AIs we currently know how to build require a lot of resources. You need vast sets of data and enormous sets of machines to process it. Many of the companies leading the field are huge, and a few are companies so large – Meta, Google – that the US government is investigating breaking them up.

We’ve seen that social media platforms and search engines, two of the dominant technologies of the internet age, give power to influence speech to whoever controls them. When Elon Musk decides to express an opinion – that we should all support Alternative für Deutschland – everyone who uses X as a medium for political discussion gets to hear that opinion. What happens when platforms that deliver information confidently, with only the flimsiest of disclaimers that we should be careful in assuming their information is true, start influencing our political discussions?

This is a good question, because this is the road we seem to be on, and it looks like a road that favors conservative plutocracy, a flavor of democracy we’ve become all too familiar with.

And so it’s a good time to remind ourselves of the idea of “technodeterminism”. This is the idea that technologies follow their own internal logics and bring about unavoidable changes in the world. The good news is that the history of technology and democracy suggests that we have a lot of choice. The world got radio in 1912, and different societies used it in radically different ways. In the Soviet Union and in Nazi Germany, it was a tool for propaganda. In the UK, a public broadcaster turned radio into a powerful tool for diversity, preserving local languages, providing news from around the world and anchoring political debates in a common set of facts. We get a choice as societies over how these technologies get used. But we have to choose quickly – it took about ten years before the basic models of how radio would unfold in the US, the UK and the Soviet Union were set in place, and the long tail of those decisions is with us still. (For lots more about this, my piece The Case for Digital Public Infrastructure is one place to start.)

Even harder, our intuitions about how technologies will shape societies are not always right. There are a lot of people – myself included – who believed that the internet would diffuse power and flatten hierarchies, giving us greater exposure to marginalized voices and making concentrated capital less influential. There were even good reasons to believe all those things, all of which turned out to be wrong. And so I don’t want to confidently declare that AI is inherently conservative or plutocratic, because I can imagine scenarios in which the opposite proves to be the case.

What I do feel confident saying is that conversations like the one we’re having today are worth our time. If AI shifts our technological landscape – and it appears that it will – it will likely shift our democratic landscape in ways that are significant, hard to predict, but also open to influence. Articulating the relationship we want between AI and democracy doesn’t guarantee that we will get it – failing to consider the relationship all but guarantees we will get a relationship we do not want.


My colleague Yuriy Brun spoke earlier in the conference and gave me a great deal to think about on the question of AI systems and bias. One of the many excellent points he made is that it’s easier to iterate with AI systems than it is with human systems: if both human judges and algorithms like COMPAS are biased against Black defendants, at least with an AI system you can run multiple versions of the algorithm, tweak them to address biases and see if you can get a more just system. I think that’s both right and interesting: I am curious whether some systems are more easily de-biased than others. My fear is that systems where we simply do not have data – what would language models look like with full Global South participation? – may be very hard to debias.
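As a rough illustration of the kind of iteration Yuriy describes, here is a minimal sketch (with made-up scores and data, not any real system): measure a fairness gap, make one tweak, and measure again. The tweak here is a per-group decision threshold; real de-biasing work involves much harder choices about which metric matters and what trade-offs are acceptable.

```python
# Audit-and-iterate loop: compare false positive rates (people flagged as
# high risk who did not reoffend) across groups, adjust, and re-measure.
import numpy as np

def false_positive_rate(scores, labels, threshold):
    flagged = scores >= threshold
    return flagged[~labels].mean()           # share of non-reoffenders flagged

def audit(scores, labels, group, thresholds):
    return {int(g): round(false_positive_rate(scores[group == g],
                                              labels[group == g],
                                              thresholds[int(g)]), 3)
            for g in np.unique(group)}

# Hypothetical data standing in for a real system's risk scores.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 50_000)
labels = rng.random(50_000) < 0.30
scores = np.clip(rng.normal(0.40 + 0.10 * group + 0.20 * labels, 0.15), 0, 1)

thresholds = {0: 0.5, 1: 0.5}
print("before:", audit(scores, labels, group, thresholds))
thresholds[1] = 0.6                          # one iteration: tweak and re-run
print("after: ", audit(scores, labels, group, thresholds))
```

The audit itself is cheap and repeatable, which is exactly the advantage Yuriy points to; what a sketch like this cannot supply is data from the communities that never made it into the training set in the first place.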

So perhaps my fear is that AIs – unless we carefully and consciously address biases in the ways Yuriy suggests – tend to embed existing societal biases in code. Eric Gordon pushed back on that after I gave the talk, pointing out that it seems to contradict my point that hard technodeterminism is not true – radio turned out very differently in the UK, the US and in totalitarian countries. My intuition is that technologies do have political influences that are tied to their specific affordances, and that we have a (relatively brief) period of influence when technologies are introduced to steer them towards the social values we want them to embed.

Looking forward to other pushback or reflection on this set of ideas as well. And ever so grateful to RTC at UMass and everyone else who made this excellent gathering possible.
