How do news organizations measure impact?
That’s the question I found myself discussing with Phil Bronstein of the San Francisco Chronicle earlier this week. He’d gotten in touch to talk about what tools are available to help newspaper editors track audience and reach for their stories, hoping that I’d have some insights on “cutting edge” techniques to track the reach and impact of news stories posted online. We talked a bit about the challenges of social media tracking after the demise of Technorati, the possible benefits of bit.ly-type analytics, and the questions of influence and reach raised by Klout and similar systems. All well and good, but measuring how many people read a story is something any web administrator should be able to do. Audience doesn’t necessarily equal impact.
There may have been a day in the rosy past of newspapers when a wall between the publisher and the editor meant that newsrooms published only what was most newsworthy and civically important, without consideration of a given story’s appeal to their audience. In an age where editors can know instantly whether a story on a school council meeting is playing better than a story about a labor action, it’s hard to believe that access to analytics doesn’t shape coverage decisions. Some outlets, like the Huffington Post, have embraced this new world to the point where they are poster children for analytics-driven coverage, using feedback from Google Analytics to inform most if not all decisions about story placement and emphasis. This willingness to respond rapidly to market feedback has likely helped HuffPo’s rapid audience and market growth – whether or not AOL’s acquisition of the site was a wise move, most newspaper publishers would welcome ten-figure interest in their properties.
The danger of traffic-based analytics driving journalism is that you may end up with newspapers that look more like Demand Media-style content farms and less like the civic guardians we want and need them to be. It’s certainly fair to observe that newspapers have been audience driven, at least in part, since inception and that some of the shortcomings of contemporary papers, as well as local newscasts, derive from a focus on driving readership and viewership. But adding analytics to the newsroom puts the question “Is this story reaching a broad audience?” front and center in a way that’s hard to ignore or avoid.
If an ideal editor is making decisions based on what’s newsworthy, and a realistic editor is balancing civic and audience concerns, how do editors determine whether they’re successfully serving both masters? What are appropriate analytics for civic impact?
As Phil and I talked, I found myself thinking about the LA Times’s coverage of obscenely high government salaries in the city of Bell, CA. In-depth investigative reporting by Ruben Vives and Jeff Gottlieb led to fraud trials, a turnover of the city government, and ultimately to a Pulitzer for the pair of reporters. The reporting on Bell, CA suggests two ways newspapers might measure civic impact: the arrest of bad guys, and the praise of one’s peers and professional societies. But these aren’t exactly quick metrics, and not every worthwhile piece of civic journalism has this magnitude of impact.
Traffic doesn’t seem to be the right measure of civic impact. A story that gets lots of page views or is widely shared might be civically relevant, but might also be salacious – amusing and popular as much of the Anthony Weiner coverage has been, I’m not sure it’s been a positive contributor to our civic involvement. Phil suggested that comments aren’t an adequate metric either. Stories that garner long comment threads could suggest broad involvement, but also may suggest partisan controversy.
I mentioned an idea that I’ve been trying to pitch for a while: in an age of participatory media, news demands participation. Or to quote Benjamin Barber, “People are apathetic because they are powerless, not powerless because they are apathetic.” For people to pay attention to an important story, it’s possible that we need to work to make it possible for people to have an impact on the outcome of the story.
Ideally, we can find better ways to do this than turning our Twitter icons green in solidarity with Iranian activists. Reporting on local civic issues offers the possibility of connecting people to opportunities for action in their own communities. And if newspaper web sites start trying to broker these connections, we gain another possible metric – the efficacy of a story in connecting people to community organizations, volunteering opportunities, and other forms of civic engagement.
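To make the idea concrete, here’s a minimal sketch of what such a metric could look like, assuming a news site logged outbound clicks on “take action” links (volunteer sign-ups, community-organization pages) alongside page views. Every name and number here is hypothetical, purely for illustration — no real newspaper analytics system is being described:

```python
# Hypothetical "civic engagement rate": the fraction of a story's
# readers who clicked through to a civic action link. This is an
# illustrative sketch, not a real analytics API.

def civic_engagement_rate(page_views, action_clicks):
    """Fraction of readers who followed a story's action links."""
    if page_views == 0:
        return 0.0
    return action_clicks / page_views

def rank_stories(stories):
    """Rank stories by engagement rate rather than raw traffic."""
    return sorted(
        stories,
        key=lambda s: civic_engagement_rate(s["views"], s["action_clicks"]),
        reverse=True,
    )

# Invented example data: a low-traffic local story vs. a
# high-traffic salacious one.
stories = [
    {"slug": "school-board-meeting", "views": 1200, "action_clicks": 90},
    {"slug": "celebrity-scandal", "views": 50000, "action_clicks": 25},
]

ranked = rank_stories(stories)
# The school-board story ranks first despite far lower traffic:
# 90/1200 = 7.5% engagement vs. 25/50000 = 0.05%.
```

The point of the sketch is the ranking criterion: a dashboard built this way would surface the low-traffic story that moved readers to act over the viral story that didn’t.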
That’s not a comfortable role for newspapers to take, Phil reminds me – it smacks of advocacy journalism. But the Bell, CA story is another form of activist journalism: by relentlessly shining a light on political malfeasance, Vives and Gottlieb were demanding that someone take action against these corrupt officials. Eventually, both citizens and prosecutors did. The difference between what I’m proposing and what the Pulitzer-winning reporters did is that I’m suggesting newspapers link to possible solutions and measure how effective those links are at driving engagement.
This would be far from a perfect metric. It wouldn’t tell you how many people read a story on homelessness, and then sought out community organizations on their own to volunteer with… though adding a feedback cycle where local organizations could communicate changes in community involvement to newspapers might. And it wouldn’t track one of the most critical functions of investigative journalism: the fear it generates in politicians and corporate actors that they could end up on the front page of a newspaper if they break the law. Clay Shirky is worried that losing this deterrence effect is one of the dangers of losing “accountability journalism” in the transition from broadcast, offline models of journalism to participatory, digital ones.
My point is not that I’ve got good metrics for civic engagement for newspaper journalism… or any journalism. It’s that we need to be thinking about finding and developing them. What we measure, we become. If we measure only how many people view, like or tweet, but not how many people learn more, act or engage, we run the risk of serving only the market and forsaking our civic responsibilities, whether we’re editing a newspaper or writing a blog.