On being wrong, or prediction versus analysis

I had coffee a couple of weeks back with my friend Eli Pariser. Eli’s working on a book about personalization and the ways in which overpersonalization can prevent us from getting thorough, complete or accurate views of the world. From what he’s told me, it sounds great, and like it will mesh with some of the arguments I’ve been making about imaginary cosmopolitanism and the media echo chambers. It’s a good time to raise questions about personalization – as Eli did in an excellent talk at Personal Democracy Forum a few months back – as companies like Facebook are promising an increasingly personalized and filtered world.

The idea that the ability to personalize and filter information in digital media could lead towards increased political polarization is an old one – Cass Sunstein made the case in Republic.com and subsequent books, and supporting or refuting his case has been a popular academic exercise. Gentzkow and Shapiro took a pass at the question recently, and while I had problems with their conclusions, there’s a lot that’s smart in their paper, including their observation that there’s far more traffic to largely nonpartisan news aggregators than to smaller, highly partisan websites.

I share many of Sunstein’s concerns. Much of my work has been trying to extend his critique about partisanship and ideological isolation beyond left/right politics in the US and towards the question of what information we get from the rest of the world and how it changes us in terms of civic participation. And so I find myself shopping both for good thinking about the future of digital media and for good data about our current usage of media. Lately I’m discovering the thinking and the data I encounter are often in tension.

When we write about the effects of technology, we’re always in tension between predicting what might happen and analyzing what has happened. The temptation is to predict as boldly as possible from the little analysis that exists. The bold predictions lead to retweets, headlines and book contracts – the analysis is useful for little-read academic papers, and always includes more shades of grey than are strictly helpful for bold predictions.

Malcolm Gladwell’s recent essay for the New Yorker on digital activism is prediction cloaked in historical example. His critique of digital activism rests on the big – though ill-supported – idea that traditional activism is built on strong ties while online activism is built on weak ones. Cherry-picking a classic example of successful offline organizing built on strong ties, he offers a sweeping dismissal of a field that’s just beginning to emerge.

My colleague Henry Farrell had a helpful reaction to Gladwell’s piece on Crooked Timber. In a post titled “Blogs, Bullets and Bullshit,” Farrell observes that Gladwell’s piece is “the purest possible distillation of the intellectual-debate-through-duelling-anecdotes that has plagued discussion over the Internet and authoritarian regimes over the last few years.” The title of his post refers to a paper I co-authored with him and others, “Blogs and Bullets: New Media in Contentious Politics” (and no, I’m not responsible for the title), which tries to figure out what effect, if any, new media has had on recent conflicts. Can we answer the question of whether new media is helping challenge authoritarian regimes? Farrell quotes himself/us in response: “The sobering answer is that, fundamentally, no one knows. To this point, little research has sought to estimate the causal effects of new media in a methodologically rigorous fashion, or to gather the rich data needed to establish causal influence. Without rigorous research designs or rich data, partisans of all viewpoints turn to anecdotal evidence and intuition.”

None of this should stop us from speculating about the implications of future technology or trying to mitigate possible bad effects. But it’s a useful reminder that we’d all benefit from some humility and uncertainty, understanding that the future we’re predicting might be coming more slowly than we think, if at all.

I recently dined on crow after arguing with Clay Shirky, who’s a Berkman fellow this year. We’ve restructured our weekly fellows meetings into a seminar, where fellows are asked to frame their work in terms of questions of internet exceptionalism. Clay’s provocation last week asked whether the abundance of journalism in a digital age means we’re sacrificing some of the disciplinary role journalism has traditionally played. In the past, a government official might eschew corruption for fear of ending up on page A1 of the local newspaper. If no one reads page A1 anymore – because readers use the internet to flip directly to the information they want deep within the paper – do newspapers lose their ability to act as a check on government power?

My disagreement with Clay wasn’t about his core point, which I think is a valid and very worrying question. I argued that his model for the future of news was too simple – we’ve not gone from newspapers to search, but from newspapers to search to social networking as the key gatekeepers for the information we see. The worry now isn’t that we miss what editors would like us to know because we’re taking deep dives via search into the news we want to know – instead, we should be worried that we get only the news our friends get. I even had “data” to support my argument – web monitoring company comScore tells us that web users now collectively spend more time on Facebook than they do on Google.

People may spend a lot of time on Facebook, but it’s nowhere near as influential in driving traffic to news sites as Google, according to data collected from UK newspapers’ websites. Google sends 47 times as much traffic to those websites as Facebook does. Other content providers are far more influential as well – the Drudge Report drives roughly eight times the traffic Facebook does, and even the BBC – which, like most media outlets, preferentially links to its own content – drives twice as much traffic as Facebook. Twitter, which is the main tool driving me to newspaper websites, doesn’t register in the top ten referrers, accounting for just 0.14% of referrals to newspaper websites in one data set.

There are a couple of ways to react to this data. One is to simply admit that I’m wrong, and that social media isn’t yet a significant driver of traffic to newspaper websites – and before we go any further, let me offer a mea culpa. Another is to argue that I’m wrong for now, but right in the long run – a few years back, social media wasn’t driving any traffic, and now it’s driving a whopping 1.5%. Extrapolating from that data, we can safely predict that social media will drive all newspaper traffic by 2020, right? (Here’s a helpful reminder of why that’s wrong: America Online still exists, and still makes 40% of its revenue from dial-up subscriptions.) Change takes a lot longer than we expect it to, and even if social media ends up being a significant shaper of attention to news, its influence today is likely tiny in comparison to existing gatekeepers and our own, personal search-based gatekeeping.


Heatmap of places mentioned by people I follow on Twitter, via openheatmap.com

My interest in making big claims about the future of media isn’t just in getting retweets and a book deal, though both would be nice. I’m interested in trying to make the future better. If the problem is that social media is leading us towards ideological isolation, there are possible solutions to that problem. Eli and I spent some time thinking through the idea of using Twitter to broaden the information we encounter. Tools like OpenHeatMap’s Twitter analysis tool are showing me the foci and blind spots in my information universe. I could use lots more news from South America, Australia, China and Japan, it seems. A group like Global Voices could work to identify a bridge figure in each of those Twitterspheres – an English speaker who’d help me understand Japan, say, by retweeting and translating key figures (as friends are doing with Chinese artist Ai Weiwei’s tweets). Eli proposes going a step further – we could analyze our social networks and determine what parts of the world each of us is knowledgeable about – perhaps you can be excused from following West African news as closely if you follow me on Twitter, as there’s a good chance I’m listening to those key bridge figures and filtering the news for you.
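To make the blind-spot idea concrete, here’s a minimal sketch of the sort of tallying a tool like OpenHeatMap’s Twitter analysis performs: count the places mentioned by the accounts you follow and see which regions never show up. The keyword-to-country map and sample tweets below are hypothetical stand-ins, not OpenHeatMap’s actual method.

```python
from collections import Counter

# Crude illustration: tally country mentions in the tweets of accounts I follow,
# to surface which regions my Twitter feed over- and under-covers.
# COUNTRY_KEYWORDS and sample_tweets are hypothetical stand-ins for real data.
COUNTRY_KEYWORDS = {
    "tokyo": "Japan", "japan": "Japan",
    "beijing": "China", "china": "China",
    "lagos": "Nigeria", "nigeria": "Nigeria",
    "sao paulo": "Brazil", "brazil": "Brazil",
}

def coverage(tweets):
    """Count how often each country is mentioned across a list of tweet texts."""
    counts = Counter()
    for text in tweets:
        lowered = text.lower()
        for keyword, country in COUNTRY_KEYWORDS.items():
            if keyword in lowered:
                counts[country] += 1
    return counts

sample_tweets = [
    "Protests in Lagos over fuel prices",
    "New media art exhibit opening in Tokyo next month",
    "Beijing announces new internet regulations",
]
print(coverage(sample_tweets).most_common())
# Countries that never appear are the blind spots where bridge figures would help most.
```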

All of which is well and good, and potentially important for the future. But analysis of media as it is, rather than as we think it will be, is a good reminder that more Americans get news from local television than the internet and that the problem of the digital age may not be in steering people through a vast diversity of content, but in avoiding overfocusing on a small set of stories that the pack is chasing. This sort of analysis offers a challenge – if we want to address systematic inequities in media attention, it’s not enough to build a new and better media, or to fix the new media that’s emerging. We need to tackle the (possibly intractable) limitations of existing media so long as it’s the media most people encounter.
