David Weinberger has an intriguing post up today about the “Fallacy of Examples”. He’s reacting to a column from Nick Kristof in the New York Times titled “The Luckiest Girl”, which recounts the story of Beatrice Biira, a young woman from Uganda whose improbable journey through Connecticut College began with the donation of a goat to her family through Heifer International.
David finds the story moving – how could you not! – but points out that Biira’s amazing journey is hardly a typical outcome of livestock donation programs. Indeed, the reason Kristof is telling it is that it’s so remarkable. And that may be something of a problem:
I’ve noticed in business writing in particular the frequency of what we can call the Fallacy of Examples (a type of Fallacy of Hasty Generalization). You read some story about a successful CEO as if we should learn from his (yes, usually it’s a him) example. But we are struck by examples frequently because they’re exceptional. As exceptions, examples are the last thing you want to learn from.
Not always, though. Sometimes examples are typical. That’s different. The trick is determining which are which.
The problem of deciding whether an example is typical or exceptional struck me as resonant with Clay Shirky’s new (brilliant, must-read, go buy it now) book Here Comes Everybody. Throughout the book, Clay points out that online communities tend to experience a power-law (Pareto) distribution of participation. If you attempt to generalize about the group as a whole from the most prolific participants, you’re going to misunderstand what’s going on.
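Clay’s point about generalizing from the most prolific participants can be sketched in a few lines of code. This is a hypothetical community with made-up parameters, not data from any real site, just to show how far the top contributors sit from the typical one under a Pareto distribution:

```python
import random

random.seed(0)

# Hypothetical community: each member's contribution count drawn from a
# Pareto (power-law) distribution. The shape parameter 1.2 is an
# assumption for illustration, not an empirical figure.
contributions = sorted(
    (int(random.paretovariate(1.2)) for _ in range(10_000)), reverse=True
)

top_100_mean = sum(contributions[:100]) / 100
overall_median = contributions[len(contributions) // 2]

# The handful of prolific members look nothing like the typical one.
print(f"mean of top 100 contributors: {top_100_mean:.0f}")
print(f"median contributor:           {overall_median}")
```

Run it and the top hundred average orders of magnitude more contributions than the median member, who posts once or twice and stops: exactly the gap between the alpha blogger and the abandoned LiveJournal.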
This is a predictable misunderstanding – we appear to have a tendency to assume that people we encounter are distributed on a bell curve. Fly into Amsterdam and you’ll notice that there are a lot of tall people around. Spend a day or two and you’ll likely conclude that Dutch people are tall, significantly taller than Americans. This turns out to be true – Dutch people are now roughly two inches taller than their American counterparts, likely due to a better diet and excellent state-subsidized healthcare – so your extrapolation from a few data points is a pretty accurate one.
Try a different experiment – watch some American TV and try to extrapolate the bell curve of body type in the US. You’re going to get it wrong, and you’re going to feel fat, no matter how skinny you happen to be. People on American television aren’t a bell curve distribution in terms of weight – they’re way, way out on an extreme. Media critics suggest that the relentless repetition of images of underweight actresses has a negative impact on young women, leading them to aspire to extreme body types.
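The contrast between the Amsterdam experiment and the TV experiment comes down to how the sample is drawn. A quick sketch, with an entirely made-up normal distribution standing in for the population:

```python
import random

random.seed(1)

# Hypothetical population trait (say, height in cm), normally distributed.
# Mean and spread are invented for illustration.
population = [random.gauss(175, 7) for _ in range(100_000)]

# Amsterdam-style sampling: a few dozen people encountered at random.
street_sample = random.sample(population, 50)
street_mean = sum(street_sample) / len(street_sample)

# TV-style sampling: only the extreme tail of the distribution is shown.
tv_sample = sorted(population)[:50]
tv_mean = sum(tv_sample) / len(tv_sample)

true_mean = sum(population) / len(population)
print(f"true mean:     {true_mean:.1f}")
print(f"street sample: {street_mean:.1f}")
print(f"TV sample:     {tv_mean:.1f}")
```

The random street sample lands within a centimeter or two of the truth; the tail-only sample is wildly off. Extrapolation from examples works when the examples are drawn at random, and fails when someone has pre-selected the extremes for you.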
Here’s the thing – it’s lots easier to write about extreme examples rather than median ones. (It’s probably easier to watch extremely thin people on TV than ones of median weight as well.) Stories of prolific wikipedians, alpha bloggers or brilliant flickr photographers are more interesting than stories about someone who set up a LiveJournal, posted five times then gave up… which is lots more typical. And Biira’s story is far more compelling than the story of a girl whose family got a goat, and is slightly better fed than the median Ugandan, but who didn’t get to go to school. This, unfortunately, is probably closer to the median effect of livestock donation – not a bad thing, by any means, but not wholly transformative.
The answer to the Fallacy of Examples is not to stop giving examples. Human beings need stories to be interested in issues – that appears to be how we’re wired to take in information. Joi Ito, writing about the recent Global Voices summit, talks about how personal stories can help solve “the caring problem”, making international incidents relevant to audiences who might not care about this news otherwise. Kristof needs to tell us about Biira to get us interested in livestock donation – we’re not going to pay attention without a human story to hold onto.
(Indeed, some critics point out that livestock donation is a form of storytelling as well. Your $120 isn’t buying a goat – it’s a way of getting you to donate to an agricultural charity which will use your money to provision livestock, but also to pay staff salaries, fundraising expenses, etc. The story of giving a goat to a poor family convinces you to give, and perhaps to give more than you otherwise would.)
The solution may be to try to contextualize the story – is the example given an ordinary or an extraordinary one? Kristof signals this with his title, making it clear that Biira is an extraordinary case. But the story would probably be a fairer one with a more representative, median example, offered as a contrast. If you buy a goat for a Ugandan child, you’re probably not going to send a young woman to college… but you just might. It’s hard for me to blame Kristof for telling this amazing story, but it makes me wonder how many unconscious and inaccurate generalizations I’m making every day, looking at extremes and unconsciously assuming they’re medians.
The purpose of a story is not to represent but to illustrate. You can use stories not just to get people to care about a thing that may or may not be common, but to illustrate that something *can* be, or even to illustrate a worry you have about something that *might* be – without any claim that such a thing has ever happened in reality, or perhaps something that has happened in the past with no claim that it will happen in the future. It’s hard for me to think of storytelling as “fair” in this way, because anecdotes are not data. Fairness is not so much in the choice of stories to show a median, as it is in placing the stories you do tell in the right context.
BTW, one of the biggest “fallacy of examples” problems in current American culture, I think, is the story of a “ticking time-bomb” and “terrorist who knows how to stop it and will tell you only if you torture him/her”, to justify torture. This is a story of a thing that, AFAIK, has never happened, and is not likely to ever happen.
Note that what you discuss here underlies much of my critique of blog-evangelism – that many of the people involved build businesses on the marketing technique of presenting extreme successes in a way that will be implicitly read as if they were typical, but the words stating that explicitly are not there. So the targets fall prey to the implication, while the hypester blog-evangelist maintains what qualifies as plausible deniability since there was no explicit statement (and then often personally attacks the target). I get flak for saying it’s an extremely cruel and exploitative business, as that analysis is not popular with high-attention bloggers.
Ethan, this touches on something that has been bothering me lately, articulated both by Chris Anderson’s “End of Theory” article (sorry–don’t know how to hyperlink in your comment field) and by the “Is the Tipping Point Toast?” controversy between Duncan Watts and Malcolm Gladwell (and, now that I think about it, Paul Ormerod’s “Why Most Things Fail”). Basically the implication of all three is that things just happen and humans invent stories to explain them, in the belief that we can somehow control or at least predict the next events that happen. Events are distributed kurtotically, with most being high frequency, low impact, but some few being very low frequency but really high impact — in fact, graphed, the events vs frequency curve is another Pareto (seems like everything is, nowadays). So that means that most of the time we are right, or near enough (sort of in the same way that I can usually make the horoscope for the day more or less “fit” my life)– but every now and again we are wrong, and sometimes spectacularly so.
I find this line of thought intriguing, but also unsettling, in a vertiginous way, because it seems to suggest that all actions are probably of equal validity, and everything is contingent — buy a goat, don’t buy a goat (or any other action). Things will happen later, and we can always link backward to whichever of the actions we took, to decide that A “caused” B (or failure to do A resulted in B rather than the preferable C). However, if Anderson/Watts/Ormerod are right, those apparently causal links are nothing more than sequences, and the universe is totally random, a bazillion steel pachinko balls bouncing down an endless board of jutting pins.
I’d be grateful to anyone who can help me out on that one (for the record, both Tolstoy and Dostoevsky took really good swings at this problem, and neither cracked it, even to his own satisfaction, so I am not at all confident I would get anywhere with it). What this DOES point out, however, is how very powerful the narrative into which we embed things becomes — and certainly one thing that we now have (vs say 200 years ago, or even 20 years ago) is lots more narratives, and thus lots more choices.
In fact, that is the message I took away from Shirky’s book (which I agree is a must-buy, must-read), that our ability to share and create narratives is now exponentially greater than it was even a decade ago – which enables a “battle of the narratives” that is impossible to win (since MY story isn’t YOUR story, and so on).
Sorry–this is too long, but I really liked your posting here.
Charles, that’s a really helpful and provocative comment. I wonder whether this impulse to narrate and explain, but not to predict, is a journalistic one. It seems to me that many of the people writing in this space are journalists, of one flavor or another. (I’d include Clay in that space, over his protests, and point out that at the end of his book, he refuses to predict whether social media will empower anti-social forces more than benign ones, preferring to focus on reporting what’s actually happening.)
Chris’s piece – which you can find here with some supporting essays: http://www.edge.org/3rd_culture/anderson08/anderson08_index.html – argues that in the Petabyte age, it no longer makes sense to offer a predictive model then test it to see whether or not it’s correct – instead, you just need to find ways to look at massive quantities of data, figure out what’s true, then explain it after the fact.
That last step is a critical one, I think. Without a model for how a complex system – Google search advertising, genetics or particle physics – works, it’s very hard to make decisions. We may make discoveries by analysing huge sets of data, but we understand those discoveries by telling stories, making models, narrating what occurs. And yes, the narratives we choose are critical, because none of them are perfect. (Working on another post right now on this, which may take a while to post, as it’s forcing me to go back and read guys like Baudrillard on simulations…)
Interesting how the comment thread evolved/transformed into a discussion about prediction-making. Charles Edward, I would recommend giving a listen (MP3) to the always-eloquent Niall Ferguson on the importance of coming up with alternative pasts as a way to temper our enthusiasm for making predictions about the future.
David–thanks for the recommendation– I will be happy to try Ferguson aurally (I confess to having had some trouble reading him, in that I kept trying to remember the difference between counter-factual history and fiction–like reading Harry Turtledove). I don’t think I was talking about prediction though so much as fretting about causality — wondering whether that which we think is causal is in fact simply sequential. But I guess that is the same point as yours, no?