Philosopher Helen Nissenbaum is one of the leading thinkers about the ethical issues we face in digital spaces. Her work on privacy as “contextual integrity” is one of the most useful tools for understanding why some online actions taken by companies and individuals feel deeply transgressive while others seem normal – we expect online systems to respect norms of privacy appropriate to the context we are interacting in, and we are often surprised and dismayed when those norms are violated. At least as fascinating, to me, is that Nissenbaum doesn’t just write books and articles – she writes code, with a team of longtime collaborators, bringing her strategies for intervention into the world, where others can adopt them, test them out, critique them or improve them.
Professor Nissenbaum spoke at MIT last Thursday about a new line of inquiry, the idea of obfuscation as a form of resistance to “data tyranny”. She is working on a book with Finn Brunton on the topic and, to my delight, more software that puts forward obfuscation as a strategy for resistance to surveillance.
Her talk begins by considering PETs – privacy enhancing technologies – building on definitions put forth by Aleecia McDonald. In reacting to “unjust and uncomfortable data collection”, we wish to resist, but the systems themselves give us no capacity to do so. We can create privacy enhancing tools as a mode of self-help, and tools that leverage obfuscation fit within this larger frame of PETs, self-help and resistance.
She defines “data tyranny” drawing on work by Michael Walzer, whose work focuses on approaches to ethics in practice: “You are tyrannized to the extent that you can be controlled by the arbitrary decision of others.” Obfuscation, Nissenbaum tells us, fights against this tyranny.
Using her framework of privacy as contextual integrity, from Privacy in Context (2010), she explains that privacy is not complete control of our personal information, nor is it perfect secrecy. Instead, it is “appropriate information flow that is consistent with ideal informational norms.” This contextual understanding is key, she explains: “I don’t believe that a right to privacy is a right to control information about oneself, or that any spread of information is wrong.” What’s wrong is when the flow of information is inconsistent with our expectations given the context. Sharing information about the books we’re searching for with the librarian we’ve asked for help doesn’t mean we’ve consented to share that information with a marketer looking to target advertisements to us – we expect one sort of sharing in that context and are right to feel misused when those norms are bent or broken.
Different privacy enhancing technologies use different strategies. One project Nissenbaum has collaborated on uses cryptography to facilitate resistance. Cryptagram (think Instagram plus crypto) allows you to publish password-protected photos on social media. The photos appear as black and white bitmaps unless you have the password (and the Cryptagram Chrome plugin installed). By encrypting the photos with AES and rendering the output as a JPEG, you gain finer control than Facebook’s ever-changing privacy settings offer, and you prevent whoever is hosting your media from identifying faces in your photos or building a more detailed profile of you from information gleaned from the images.
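The underlying move is simple enough to sketch. Below is a minimal, hypothetical TypeScript version using the browser’s Web Crypto API; it is not Cryptagram’s actual code or file format (and since JPEG compression is lossy, the real tool’s encoding is presumably more robust than this), but it illustrates the idea of turning a password-encrypted photo into an innocuous-looking bitmap:

```typescript
// Hypothetical sketch of the Cryptagram idea: derive an AES key from a
// password, encrypt the photo's bytes, and render the ciphertext as a
// grayscale bitmap that can be posted like any other image.
// (Not the real tool's code or file format.)

async function deriveKey(password: string, salt: Uint8Array): Promise<CryptoKey> {
  const material = await crypto.subtle.importKey(
    "raw", new TextEncoder().encode(password), "PBKDF2", false, ["deriveKey"]);
  return crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 100_000, hash: "SHA-256" },
    material, { name: "AES-GCM", length: 256 }, false, ["encrypt"]);
}

async function encryptToBitmap(photo: ArrayBuffer, password: string): Promise<HTMLCanvasElement> {
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const key = await deriveKey(password, salt);
  const cipher = new Uint8Array(
    await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, photo));

  // Paint each ciphertext byte as one grayscale pixel in a square canvas.
  // (A real tool would also embed salt and iv so the password holder can decrypt.)
  const side = Math.ceil(Math.sqrt(cipher.length));
  const canvas = document.createElement("canvas");
  canvas.width = canvas.height = side;
  const ctx = canvas.getContext("2d")!;
  const image = ctx.createImageData(side, side);
  cipher.forEach((byte, i) => image.data.set([byte, byte, byte, 255], i * 4));
  ctx.putImageData(image, 0, 0);
  return canvas; // canvas.toDataURL("image/png") is what gets uploaded
}
```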
Other PETs use data obfuscation as their core tool of resistance. Nissenbaum defines obfuscation as “The production, inclusion, addition or communication of misleading, ambiguous, or false data in an effort to evade, distract or confuse data gatherers or diminish the reliability (and value) of data aggregations.” In other words, obfuscation doesn’t make data unreadable; it hides it in a crowd.
Good luck finding Waldo now.
TrackMeNot is a project Nissenbaum began in 2006 in collaboration with Daniel Howe and Vincent Toubiana. It was a reaction to a Department of Justice subpoena that sought search query data from Google as a way of documenting online child pornography, and to the deanonymization of an AOL data set, which suggested that individuals could be personally identified on the basis of their search queries. “The notion that all searches were being stored and held felt shocking at the time,” Nissenbaum explained. “Perhaps we all take it for granted now.”
Her friends in computer science departments told her “there’s nothing you can do about it”, as Google is unlikely to change its policies about logging search requests. So she and her colleagues developed TrackMeNot, which sends a large number of fake search queries to a search engine along with each valid query, then automatically sorts through the results the engine sends back, presenting only the results for the valid query. The tool works within Firefox, and she reports that roughly 40,000 people use it regularly. You can watch the queries the tool is sending out, or choose your own RSS feed to generate queries from. (From the presentation, it looks like the tool, by default, subscribes to a newspaper’s feed, chops articles into n-grams and sends random n-grams to Google or other engines as “cover traffic”.)
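To make that cover-traffic idea concrete, here is a rough, hypothetical TypeScript sketch of the same strategy (emphatically not TrackMeNot’s actual code): pull text from an RSS feed, chop it into n-grams, and fire decoy queries at a search engine so real queries are hidden in the noise. An extension with host permissions could issue these requests; an ordinary page script would be blocked by CORS.

```typescript
// Rough sketch of cover traffic in the spirit of TrackMeNot (not its real code):
// pull text from an RSS feed, chop it into n-grams, and issue decoy searches.

async function feedNGrams(feedUrl: string, n = 3): Promise<string[]> {
  const xml = await (await fetch(feedUrl)).text();
  const doc = new DOMParser().parseFromString(xml, "text/xml");
  const words = Array.from(doc.querySelectorAll("title, description"))
    .map(el => el.textContent ?? "")
    .join(" ")
    .split(/\W+/)
    .filter(w => w.length > 2);
  const grams: string[] = [];
  for (let i = 0; i + n <= words.length; i++) {
    grams.push(words.slice(i, i + n).join(" "));
  }
  return grams;
}

async function sendCoverQueries(feedUrl: string, count: number): Promise<void> {
  const grams = await feedNGrams(feedUrl);
  if (grams.length === 0) return;
  for (let i = 0; i < count; i++) {
    const decoy = grams[Math.floor(Math.random() * grams.length)];
    // Each decoy query looks, in form, just like a real one.
    await fetch("https://www.google.com/search?q=" + encodeURIComponent(decoy),
      { mode: "no-cors" });
  }
}
```

A serious tool would also space the decoys out over time and mimic the rest of a searcher’s behavior; this sketch leaves all of that out.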
The tool has prompted many reactions, including objections that TrackMeNot doesn’t work or is unethical. Security experts have suggested to her that search engines may be able to separate the wheat from the chaff, filtering out her fake queries. As a philosopher, Nissenbaum finds the questions about the ethics of obfuscation at least as interesting.
Obfuscation, she tells us, is everywhere. It’s simply the strategy of hiding in plain sight. She quotes G.K. Chesterton, from The Innocence of Father Brown: “Where does a wise man hide a leaf? In the forest. But what does he do if there is no forest? He grows a forest to hide it in.”
With Finn Brunton, she has been investigating historical and natural examples of leaf-hiding and forest-growing strategies. Some are easy to understand: if you don’t want your purchases tracked by your local supermarket, you can swap loyalty cards with a friend, obfuscating both your profiles. Others are more technically complicated. During the Second World War, bomber pilots began dropping strips of black paper backed with aluminum foil before releasing their payloads. The reflective paper obscured their signal on radar, showing dozens or hundreds of targets instead of a single plane and making it possible for bombers to evade interception.
Programmers and hardware engineers routinely obfuscate code to make it difficult to replicate their work. (As someone who has managed programmers for two decades, I think unintentional obfuscation is at least as common as intentional…) But some of the best examples of obfuscation are less technical in nature. Consider the Craigslist robber, who robbed a bank in Monroe, Washington in 2008 and got away by fading into a crowd. He’d placed an ad on Craigslist asking road maintenance workers to show up for a job interview dressed in a blue shirt, yellow vest, safety goggles and a respirator mask. A dozen showed up, and the robber – wearing the same outfit – was able to get away.
Nature obfuscates as well. The orb spider, who needs her web out in the open to catch prey, also needs to avoid becoming prey herself. She builds a decoy of dirt, grass and other material in the web, exactly the size of the spider, hoping wasps will attack the diversion instead of her.
Does obfuscation work? Is it ethical? Should it be banned? Examples like the orb spider suggest that it’s a natural strategy for self-preservation. But examples like Uber’s technique of calling Gett and Lyft drivers for rides, standing them up and then calling to recruit them, which Nissenbaum cites as another example of obfuscation, raise uncomfortable questions.
“What does it mean to ask ‘Does it work?’” asks Nissenbaum. “Works for what? There is no universally externalizable criterion we can ask for whether obfuscation works.” Obfuscation could work to buy time, or to provide plausible deniability. It could provide cover, foil profiling, elude surveillance, express a protest or subvert a system. Obfuscation that works for one or more criteria may fail for others.
The EFF, she tells us, has been a sometimes fierce critic of TrackMeNot, perhaps due to their ongoing support for Tor, which Nissenbaum makes clear she admires and supports. She concedes that TrackMeNot is not a tool for hiding your identity as Tor is, and notes that they’ve not yet decided to block third-party cookies with TrackMeNot, a key step to providing identity protection. But TrackMeNot is working under a different threat model – it seeks to obfuscate your search profile, not disguise your identity.
She and Brunton are working on a taxonomy of obfuscation, based on what a technique seeks to accomplish. Radar chaff and the Craigslist robber are examples of obfuscation to buy time, while loyalty card swapping, Tor relays and TrackMeNot seek to provide plausible deniability. Other projects obfuscate to provide cover, to elude surveillance or as a form of protest, as in the apocryphal story of the King of Denmark wearing a yellow Star of David and ordering his subjects to do so as well to obscure the identity of Jewish Danes.
For Nissenbaum, obfuscation is a “weapon of the weak” against data tyranny. She takes the term “weapon of the weak” from anarchist political scholar James C. Scott, who used the term to explain how peasants in Malaysia resist authority when they have insufficient resources to start a rebellion. Obfuscatory measures may be a “weak weapon of the weak”, vulnerable to attack. But these methods need to be considered in circumstantial ways, specific to the problem we are trying to solve.
“You’re an individual interacting with a search engine”, Nissenbaum tells us, “and you feel it’s wrong that your intellectual pursuit, your search engine use, should be profiled by search engines any more than we should be tracked in libraries.” The search engines keep telling you “we’re going to improve the experience for you”. How do we resist? “You can plead with the search engine, or plead with the government. But the one window we have into the search engine is our queries.” We can influence search engines this way “because we are not able to impose our will through other ways.”
This tactic of obfuscation as the weapon of the weak is one she’s bringing into a new space with a new project, AdNauseam, developed with Daniel Howe and Mushon Zer-Aviv. The purpose of AdNauseam is pretty straightforward: it clicks on all the ads you encounter. It’s built on top of Adblock Plus, but in addition to blocking ads, it registers a click on each one, and it also collects the ads for you, so you can see them as a mosaic and better understand what the internet thinks of you.
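As a thumbnail of how that might look inside a page, here is a hypothetical TypeScript content-script sketch, not AdNauseam’s actual implementation: it assumes the ad blocker has tagged matched links with a made-up data-ad-detected attribute, quietly fetches each ad’s click-through URL, and keeps the creatives for the mosaic.

```typescript
// Hypothetical content-script sketch of the AdNauseam idea (not its real code):
// visit the links an ad blocker has flagged as ads by fetching their targets,
// and keep the creatives for a personal mosaic.

interface CollectedAd {
  href: string;      // the ad's click-through URL
  imageSrc?: string; // the creative, if the ad is an image
  seenAt: number;
}

const collected: CollectedAd[] = [];
const visited = new Set<string>();

function visitAds(adSelector = "a[data-ad-detected]") {
  // "data-ad-detected" is a made-up marker a blocker might attach to matches.
  document.querySelectorAll<HTMLAnchorElement>(adSelector).forEach(link => {
    if (visited.has(link.href)) return;
    visited.add(link.href);

    const img = link.querySelector("img");
    collected.push({ href: link.href, imageSrc: img?.src, seenAt: Date.now() });

    // Register a "click" without navigating away: request the ad's target URL.
    fetch(link.href, { mode: "no-cors", credentials: "omit" }).catch(() => {
      /* ignore failures; the point is making the request, not reading the response */
    });
  });
}

// Sweep once the page settles, and again whenever new ads are injected.
window.addEventListener("load", () => {
  visitAds();
  new MutationObserver(() => visitAds())
    .observe(document.body, { childList: true, subtree: true });
});
```

Whether ad networks count such requests as genuine clicks is, as I discuss below, very much an open question.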
Again, Nissenbaum asks us to consider obfuscation as a family of strategies aimed at many different ends. These strategies differ in the source of the obfuscation, the amount and type of noise in the system, whether targets are selective or general, whether the approach is stealthy or bald-faced, who benefits from the obfuscation, and the resources of whoever you are trying to hide from. AdNauseam is bald-faced, general, personal in source (though it benefits from cooperation with others) and is taking on an adversary that is less powerful than the NSA, but perhaps not much less powerful.
Aside from questions of whether this will work, Nissenbaum asks if this is ethical. Objections include that it’s deceptive, that it wastes resources, damages the system, and enables free riding on the backs of your peers. In an ethical analysis, she reminds us, ends matter, and here the ends are just: eluding profiling and surveillance, preserving ideal information norms. This is different from robbing a bank, destroying a rival, or escaping a predator.
But means matter too. Ethicists ask if means are proportionate. If there is harm that comes from obfuscation, can another method work as well? In this case, that other method can be hard to see. Opting out hasn’t worked, as the Do Not Track effort collapsed. Transparency is a false solution, as companies already flood us with data about how they’re using our data, leading us to accept policies we don’t and can’t read. Should we shape corporate best practice? That’s simply asking the fox to guard the henhouse. And changing laws could take years if it ever succeeds.
In exploring waste, free riding, pollution, damage or subversion, Nissenbaum tells us, you must ask “What are you wasting? Who’s free riding? What are you polluting? Whose costs, whose risks, whose benefits should we consider?” Is polluting the stream of information sent to advertisers somehow worse than polluting my ability to read online without being polluted by surveillance?
Big data shifts risks around, Nissenbaum tells us. As an advertiser, I want to spend my ad money wisely, and tracking users shifts my risk in buying ads. The cost is backend collection of data, which places people at risk: think of the recent revelations from Home Depot about stolen credit card information. Databases that are collected for the public good, for reasons like preventing terrorism, may expose individuals to even greater risk. We need a conversation about whether there are greater goods to protect than just keeping ourselves safe from terrorism.
We can understand weapons of the weak, Nissenbaum tells us, by understanding threat models. We need to study the science, engineering and statistical capabilities of these businesses. In the process we discover “enabling execution vectors”, ways we can attack these systems through hackable standards, open protocols and open access to systems. And we need to ensure that our ability to use these weapons of the weak is not quashed by enforceable terms of service that simply prevent their use. Without access to the inner machinations of these systems, Nissenbaum argues, these weapons may be all we have.
An exceedingly lively conversation followed this talk. I was the moderator of that conversation, and so I have no good record of what transpired, but I’ll use this space – where I usually discuss Q&A – to share some of my own questions for Professor Nissenbaum.
One question begins by asking “What’s the theory of change for the project?” If the goal is to collapse the ad industry as we know it, I am skeptical of the project’s success at scale. Clicking ads is an extremely unusual behavior for human websurfers – clickthrough on banner ads is a tiny fraction of one percent for most users. Clicking on lots of ads is, however, frequent behavior for a clickfraud bot, a tool that’s part of a strategy in which a person hosts ads on their site, then unleashes a program to click on those ads, generating micropayments to that person for each ad clicked. Essentially, it defrauds advertisers to reward a content provider. Clickfraud bots are really common, and most ad networks are pretty good at not paying for fraudulent clicks. This leads me to conclude that much of what AdNauseam does will be filtered out by ad networks and not counted as clicks.
This is a good outcome, Nissenbaum argues – you’ve disguised your human behavior as bot behavior and encouraged ad networks to remove you from their system. But it’s worth thinking about the costs. If I am a content provider, attracting human users who look like bots has two costs. One, I get no revenue from them, as ad providers filter out their clicks. Two, I may well lose advertisers as they decide to move from my bot-riddled site to sites that have a higher proportion of “human” readers. She’s shifted the cost from the reader to the content host rather than crashing the system. (Offering this scenario to Nissenbaum, using myself as an example of a content provider and positing a Global Voices that was supported by advertising, I spun a case of our poor nonprofit going under thanks to her tool. In response, she quipped “Serves you right for working with those tracking-centric companies.” I’m looking forward to a more nuanced answer if she agrees with the premise of my critique.)
If the theory of change for the project is sparking a discussion about the ethics of advertising systems – a topic I am passionate about – rather than crashing the ad economy as we know it, I’m far more sympathetic. To me, AdNauseam does a great job of provoking conversations about the bargain of attention – and surveillance – for free content and services. I just don’t see it as an especially effective weapon for bringing those systems to their knees.
My other question centers on the idea that this technique is a “weapon of the weak”. To put it bluntly, a tenured professor at a prestigious university is nowhere near as disenfranchised as the peasants Scott is writing about. This isn’t a criticism that Nissenbaum is disrespecting the oppression of those worse off than she is, or a complaint about making a false comparison between two very different types of oppression. Instead, it’s a question about the current state of the political process in the United States. When a learned, powerful and enfranchised person feels like she’s powerless to change the regulation of technology, something is deeply messed up. Why does the regulation of technology turn the otherwise powerful into the weak? (Or is this perception of weakness the symbol of a broader alienation from politics and other forms of civic engagement?)
I’ve been advocating a theory of civic efficacy that considers at least four possible levers of change: seeking change through laws, through markets, through shaping social norms or through code. Using this framework, we could consider passing laws to ensure that the FTC protects user privacy in online systems. Or we could try to start companies that promise not to use user data (DuckDuckGo, Pinboard) and work to ensure they outcompete their rivals. We could try to shape corporate norms, seeking acknowledgement that “gentlemen do not read each other’s mail”.
If Nissenbaum’s solution were likely to crash advertising as we know it, it might be superior to any of these other theories of change. If it is, instead, a creative protest that obscures an individual’s data and makes a statement at the cost of damaging online publishers, it raises the question of whether these means are justifiable or others bear closer consideration.
I tend to share Nissenbaum’s sense that advocating for regulation in this space is likely to be futile. I have more hopes for the market-based and norms-based theories – I support Rebecca MacKinnon’s Ranking Digital Rights project because it seeks to make visible companies that protect digital rights and allows users to reward their good behavior.
But I raise the issue of weapons of the weak because I suspect Nissenbaum is right – I have a hard time imagining a successful campaign to defend online privacy against advertisers. If she’s right that many of us hate and resent the surveillance that accompanies an ad-supported internet, what’s so wrong in our political system that we feel powerless to change these policies through conventional channels?