Disinfo and Elections in the Global Majority

I’m just back from a very quick trip to Rio de Janeiro to take part in a conference put together by my dear friend and colleague Jonathan Ong, called Disinformation and Elections in the Global Majority. Hosted at the Pontifical Catholic University of Rio (wonderfully enough, shortened to PUC, pronounced “Pookie”) and co-organized by Jonathan and JM Lanuza at UMass Amherst and Marcelo Alvez at PUC, the event began as a conversation between Filipino and Brazilian scholars about their countries’ 2022 elections. It has since expanded into a much broader dialogue about what the world can learn from the experiences that democracies in the Global South have had with disinformation during elections. The gathering brought together activists, journalists and scholars from India, Moldova, South Africa, Indonesia, Myanmar and other countries.

Opening panel for Disinfo and Elections in the Global Majority

At the heart of the gathering was a question: why did the 2022 elections in Brazil and the Philippines turn out so differently? Both countries have long histories of colonial domination, independence, and advances and retreats of democracy. In the 21st century, both nations were promising anchors of democracy in their regions, until populist autocrats took power in the late 2010s. The Philippines elected Rodrigo Duterte and Brazil elected Jair Bolsonaro, both authoritarians in the Viktor Orban/Donald Trump model, both using social media to activate supporters and channel divisive cultural issues and fear into popular support.

Both countries held elections in 2022 that served as referenda on authoritarian leadership. In Brazil, Bolsonaro was narrowly defeated by Lula, the leftist former president who came with his own baggage of corruption scandals. My Brazilian friends took pains to explain that Brazil came much closer to a coup than most outside observers understand. Military police blocked buses from bringing voters to polling places known to be Lula strongholds. Motivated by social media and a pro-Bolsonaro radio station, citizens stormed government buildings on January 8, 2023, seeking to overturn the election results. Ultimately, Lula prevailed, winning by roughly two million votes, a slim margin.

The situation in the Philippines was utterly different. Duterte’s chosen successor was Bongbong Marcos, son of dictator Ferdinand Marcos. His running mate was Duterte’s daughter, making the idea of familial succession completely clear. The opposition in the Philippines didn’t unite behind a single candidate as the Brazilian opposition did, and the leading opposition candidate lost to Marcos in a landslide.

Despite the different outcomes, there were striking similarities between the elections. In both countries, a narrative emerged – first online, then flowing into mainstream media – that electoral institutions were not to be trusted and that the populist candidates were fighting the corruption of elites. In Brazil, that narrative drew on nostalgia for military leadership, just the thing to save the nation from a slide into Communism under Lula. In the Philippines, Marcos managed to rebrand his famously corrupt parents as victims of elite persecution. And in both countries, elections were set up to be distrusted. This position has no downsides for candidates who cast themselves as outsiders: if they win, it’s because they overcame the corrupt system, while if they lose, it’s because the election was stolen.

These narratives emerge online, from hired guns and from individual partisans. The language used to describe them differs between countries. Brazil has firm policies, set by its electoral court, allowing it to order the removal of “disinformation”, and so that’s the term of choice in the Brazilian context. Jonathan and his team see the problem as broader than disinformation, encompassing different forms of propaganda, and refer to this online speech as “influence operations”. I like this term, as it includes a particularly tricky type of speech: participatory propaganda, in which partisans spread and shape narratives on their own, not necessarily because they are receiving commands as part of a “cyber army”.

The Brazilians are rightly proud of having defended their democracy, and they point to the judiciary as the key institution. The electoral court ruled on disinformation and ordered platforms to take down content… and they did. In the Philippines, civil society organizations pressured platforms behind the scenes to control these influence operations, with far less success. So one hypothesis is that controlling platform behavior is a key step in defending free elections.

This feels a bit strange as an American, as we tend to be pretty sensitive to any restrictions on political speech. I asked variants of the same question of several Brazilian friends: is it okay to restrict online speech if you’re not restricting the press? The answers surprised me: we should be restricting the press as well. Certain restrictions on media, they argued, are necessary to ensure an environment conducive to holding fair elections.

It’s understandable that people around the world – particularly in the Global South – would want to see more control over the large online platforms. These companies make money in countries like Brazil and the Philippines, but have – at best – a limited knowledge of politics and culture on the ground, and apparently limited interest in complying with local laws. But there’s a problem with pushing for strong local government regulation of the internet: India. The Modi administration has been aggressively controlling the online speech environment, flooding platforms with demands to take down content the government finds offensive. The justification for these takedowns? The information they challenge is “disinformation”, banned under local laws. India is a massive market for Twitter and Elon Musk’s other companies, and particularly spineless “free speech” advocates – Musk among them – have been extremely receptive to its government’s demands.

At the same time that the Modi government is silencing critical speech, India has developed profound disinformation problems of its own, in support of government narratives. A group of Indian activists, journalists and scholars (who asked that their remarks be kept under the Chatham House Rule to protect themselves and their families) presented research on the phenomenon of the “vox pop”: viral videos in which “ordinary Indians” share points of view consonant with government narratives. Vox pops allow the BJP to promote divisive and false narratives like “love jihad”, the idea that Indian Muslims are seeking to trap Hindu women in marriage to turn the nation into an Islamic state; one speaker described this as the “Indian Great Replacement Theory”. The same actors appear again and again in these videos, but that’s a hard pattern to see unless you’re collecting hundreds of them. Many Indians encounter these influence campaigns through WhatsApp groups – a family member will post a video as a daily status update and encourage others to share it as well.

Even if platforms were inclined to combat this sort of viral disinformation, the researchers argue that they might not be able to, because of the challenge of cultural context. One researcher showed a popular Hindutva meme: people changing their job status to “cauliflower farmer”. This is a reference to the 1989 Logain massacre in Bihar, India, in which 119 Muslims were killed and buried, with cauliflower planted over their graves to disguise the mass grave. To recognize this change in job status as a form of hate speech and a celebration of mass murder, a content moderator would need to understand Indian history quite well. Needless to say, the cauliflower farmer posts aren’t being identified and sent to platforms by the Modi government, while many innocuous posts supporting Kashmiri or Sikh rights are.

The Indian situation – a powerful government whose institutions are being used to silence online speech rather than to combat hateful influence operations – should serve as a caution to anyone who sees regulating social media platforms as a silver bullet for mis/disinformation. India is one of the world’s leaders in requesting takedowns of content, and many of these takedowns are weaponized against political or social opposition. But that’s not the only situation in which government regulation of US-based social media is an unrealistic approach to a complex problem.

A speaker representing a fact-checking organization from Moldova presented the audience with a ferocious problem. Moldova was part of the former Soviet Union, and its breakaway eastern region of Transnistria includes Russian-speaking citizens who want closer ties with Russia. Most Moldovans speak Romanian, and the nation is split politically between those who seek a future in the European Union and those who look to Russia. But the debate between the EU and Russia isn’t being held on a level playing field: according to our speaker, Russia spends the equivalent of 0.5% of Moldova’s GDP annually on advertising and disinformation campaigns there.

Additionally, if you’re a Russian speaker in Moldova, you almost certainly rely on Russian-hosted platforms like VK as major sources of information – controlling the spread of influence operations on Facebook matters less than controlling it on platforms like Telegram, where Russia may have a significant influence on platform operations. This problem is even more profound in a place like Taiwan, which is both a target for influence operations from China and a place where language consonance means virtually everyone uses tools like Weibo. Taiwan is extremely unlikely to have success getting government-influenced platforms to take down disinformation.

Our Moldovan friend had some good news: pre-bunking appears to work. This is the practice of explaining the narratives an attacker is likely to use before those narratives emerge. For instance, if the Biden administration has any hope of preventing a repeat of January 6, 2021, it will be by explaining loudly that Trump and his allies are falsely claiming election interference because they can’t win a free and fair election. Successful pre-bunking in Moldova – evidenced by the pro-West party winning a large share of the 2023 local elections despite massive election spending from Russia – required making electoral information funny and viral. Unfortunately, there’s no easy formula to guarantee that civic information will be funny and viral, and no guarantee that pro-West forces will remain allies of pro-democracy movements once they’ve secured power.

I felt most hopeful after a presentation by Tai Nalon of Aos Fatos, who has built her fact-checking organization into a major producer of innovative disinfo-fighting tools. Disinfo in Brazil often spreads via WhatsApp, an especially challenging platform on which to combat influence operations. WhatsApp groups are private, which makes studying them an ethical challenge (you can ask to join a group, but you are likely to be turned down, or you can try to join under false pretenses, which many IRBs will forbid) as well as a technical one (WhatsApp is encrypted by default, meaning you can’t monitor conversations as a researcher without being invited to take part).

Aos Fatos has deployed various fact-checking robots in WhatsApp and Telegram channels. Their most recent, called Fatima, uses GPT-4 to parse requests for information, which are matched against Aos Fatos’s database of claims and debunking information. After retrieving this human-vetted material, Fatima uses GPT-4 to compose a conversational response to the user. Because Fatima is essentially a search engine over Aos Fatos’s carefully produced debunking material, it should be much more resistant to hallucination than simply unleashing ChatGPT on a Telegram channel to perform debunks.
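To make that architecture concrete, here’s a minimal sketch of a retrieve-then-respond bot in the spirit of Fatima. The claims database, matching logic and prompts below are my own illustrative assumptions, not Aos Fatos’s actual code; the only real dependency is the OpenAI Python client.

```python
# Sketch of a Fatima-style debunking bot: retrieve a human-written debunk,
# then have the model rephrase ONLY that material conversationally.
# CLAIMS_DB, retrieve_debunk and the prompts are hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for a database of vetted claim/debunk pairs.
CLAIMS_DB = [
    {
        "claim": "electronic voting machines can be remotely hacked to change votes",
        "debunk": "Brazil's voting machines are not connected to the internet, "
                  "and results are audited against printed tallies at each polling place.",
    },
    # ... more vetted claim/debunk pairs ...
]

def retrieve_debunk(user_message: str) -> dict | None:
    """Naive retrieval: pick the claim with the most word overlap.
    A production system would use embeddings or a real search index."""
    words = set(user_message.lower().split())
    best, best_score = None, 0
    for entry in CLAIMS_DB:
        score = len(words & set(entry["claim"].split()))
        if score > best_score:
            best, best_score = entry, score
    return best

def answer(user_message: str) -> str:
    entry = retrieve_debunk(user_message)
    if entry is None:
        return "I don't have vetted information about that claim yet."
    # The model is told to rely only on the retrieved debunk, which is
    # what limits hallucination relative to a bare chatbot.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a fact-checking assistant. Answer using ONLY "
                        f"this vetted material: {entry['debunk']}"},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

The key design choice is that the model never answers from its own knowledge: it only rephrases a vetted debunk, which is what makes a bot like this safer than a general-purpose chatbot.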

Fatima is one of several impressive tools Aos Fatos has produced. They have a lovely “disinfo tracker” called Radar that runs a list of nationalist and extremist keywords against various social media APIs to track how influence narratives are trending over time. (Unfortunately, like everyone else, they are suffering from the closure of the social media APIs necessary to do this work.) And Golpeflix collects video and images of the attempted coup of January 2023. Even for folks like me who firmly believe that tools alone won’t conquer well-financed disinfo campaigns, the capabilities and polish of Aos Fatos’s tools set a very high bar for what’s possible.
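The core of a Radar-style tracker is simple to sketch, even if the production version is far richer. The keywords and the post format below are hypothetical stand-ins for whatever a social media API actually returns:

```python
# Sketch of a Radar-style keyword tracker: count daily matches for a
# watchlist of terms across a stream of posts. KEYWORDS and the post
# format are illustrative assumptions, not Aos Fatos's actual setup.
import re
from collections import Counter
from datetime import datetime

# Hypothetical watchlist; Radar tracks nationalist/extremist vocabulary.
KEYWORDS = ["fraude eleitoral", "urnas", "intervenção militar"]
PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

def trend(posts: list[dict]) -> Counter:
    """posts: [{'created_at': '2023-01-08T12:00:00', 'text': '...'}, ...]
    Returns matching posts per day, suitable for plotting a narrative's
    rise and fall over time."""
    daily = Counter()
    for post in posts:
        if PATTERN.search(post["text"]):
            day = datetime.fromisoformat(post["created_at"]).date()
            daily[day] += 1
    return daily
```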

I am, as a rule, somewhat skeptical that mis/disinfo is a central problem for the defense of democracy in a digital world. I think Jonathan Ong’s “influence operations” formulation is much stronger, as it recognizes that propaganda, rather than disinfo per se, is the key problem. But even broader is the problem of conflicting, irreconcilable narratives – the danger in the US is that Trump and Biden supporters are so separated in their understandings of the world that there is no longer a set of common facts they can agree on. The tactic of splitting reality into two or more pieces – which several speakers linked to Steve Bannon, but which likely has roots in Putin’s Russia – is becoming pervasive around the world. Finding ways to fight for an understandable reality, one in which people around the world can engage in democratic decisionmaking, is a worthy cause and a good reason to get on an airplane, even for a very brief visit.

Needless to say, I am beyond glad that I participated in the conference. At the same time, I increasingly relate to Danny Glover in Lethal Weapon – I am getting too old for this shit, specifically the “three days on another continent” travel that teaching requires. Glover was 41 when that film was made, playing 50. I sometimes feel like I am 50, playing 30. At the same time, I hope I never get too old for this shit – it’s wonderful to put everything aside for a few days and learn something new from people I otherwise would be unlikely to meet. Thank god I’m not too old for this shit.
