In a recent blog post, my friend and colleague Evgeny Morozov questions the responsibilities of academics who study internet censorship circumvention tools. As one of the academics Morozov mentions by name, I felt compelled to address his concerns. I should make clear that my response is on my own behalf, not on behalf of any of my colleagues at Berkman or elsewhere.
Evgeny’s concern in his recent post appears to be that I haven’t publicly critiqued Haystack, a proposed censorship circumvention tool that’s received a great deal of laudatory press coverage. That’s true. Neither have I said anything positive about the tool on my blog or in the press.
Not all dialog takes place in blog posts or in op-eds. In the security field, dialog ranges from in-person, off the record conversations to published scholarly research, and everywhere in between.
It would be a mistake to conclude that the “internet intellectuals” Evgeny calls out in his piece were silent on concerns about Haystack simply because we’ve been speaking privately, not publicly. I’ve offered counsel to several funders of circumvention tools about Haystack, raising concerns that the code and protocols were unpublished, unverified and untested. To the best of my knowledge, none of the people I’d spoken to ended up offering funding for the project. I spoke to every journalist who asked me about the project and offered a similar answer. In a post discussing his involvement with covering Haystack and potential conflicts of interest, Cyrus Farivar makes clear that I’ve expressed a great deal of skepticism to him offline about the project.
I’ve not published on Haystack for a very simple reason: I haven’t been able to conduct a proper evaluation of either the tool or the protocols behind it. I’ve been in contact with Austin for quite some time, seeking access to the code or an in-depth discussion of the protocol. I have high hopes that he’ll allow a version of the tool to be included in the study of circumvention tools we’re scheduled to carry out at Berkman later this year. I have also been in dialog with Iranian activists, none of whom reported using the tool, or working with anyone who used the tool – while it’s reassuring that activists weren’t using untested tools, it’s been frustrating, as it’s also made it impossible for us to test the tool in the wild by working with someone using it. (Heap now reports that the tool was tested with roughly a dozen users in Iran, which helps explain why we had a hard time finding users of it.)
In providing an academic evaluation of these tools, it’s important for us to approach the field with a minimum of bias. I have a natural bias against tools that rely on security through obscurity, and it’s been hard for me to put this bias aside in evaluating circumvention tools. As it turns out, some of the closed-source tools developed by the GIFC group are among the most impressive tools we’ve seen, a finding that’s reminded me to stick with an objective evaluation method and not with my preconceptions about the field. I don’t encourage the usage of tools I haven’t had the chance to evaluate, and as mentioned above, I’ve made clear to all interlocutors that I haven’t been able to evaluate Haystack. It is frustrating to me that I’m not able to act both as an advocate for or against tools in this space and as a trusted evaluator – there’s a conflict between those roles that I’ve not been able to bridge and that limits me to speaking publicly about tools I have been able to evaluate.
I can understand Evgeny’s frustration with the popular press’s embrace of Haystack, and frankly I’ve shared that frustration. Had I praised Haystack, Evgeny would be well justified in calling me out. Evgeny feels that I’ve failed by not making public my concerns about Haystack before evaluating it. Had I done so, Evgeny or any other commentator would have been justified in calling me irresponsible for rubbishing a tool without examining it, and critics of the testing methodology we’ve worked to develop at Berkman would have been right to ask questions about our objectivity as reviewers. Evaluating these tools forces my colleagues and me to be very careful about what we say, especially regarding tools we have not been able to obtain information about.
I’m happy that the scrutiny of Haystack will lead to someone conducting a thorough evaluation of the code and protocol behind it. I hope that the justifiable anger over the press’s coverage of the tool will lead technology reporters to ask better questions before celebrating these tools unquestioningly, as my colleague Jillian York suggests.