Ten years ago, I used to get pissed off when other companies implemented ideas before we had a chance to build them at Tripod. I’ve mellowed out in the intervening decade and now am generally just grateful when something I thought was a cool idea comes into being.
I’ve been talking with friends and colleagues for the past couple of years about starting a business based on the idea of human filtering – the idea that there are certain tasks that humans, working in parallel, will do better than computers, at least in the foreseeable future. I wrote a bit about this when I first applied for a fellowship at Berkman – part of my application included a section on research interests, where I listed “human filtering”:
After decades of Artificial Intelligence research, it’s become clear that certain tasks that are extremely easy for humans to perform are fiendishly difficult for computers to complete. Some of these tasks – image, speech and pattern recognition, text comprehension and analysis – have numerous practical, real-world applications. Human Filtering posits that the best way to solve these problems is not to write better code, but to mobilize people in developing nations to work in parallel with software to solve these tasks.
My interest in human filtering was based on an experience almost a decade ago at Tripod, where we discovered a way to eliminate pornography from user homepages by harnessing some simple technology and the image recognition skills of underpaid college students. But my hope for the technology has always been that human filtering could be used to harness the talents of people in the developing world, who could summarize text, transcribe audio files and identify features in images, solving problems that are difficult for algorithms and making a decent wage in the process. I’ve even written a business plan, with a team in Accra crunching data and a small team in the US travelling to Fortune 500 companies to identify problems that could be solved through human filtering.
Fortunately, Amazon beat me to the punch. (Or, more likely, Amazon bought a company that beat me to the punch.) They’ve announced a new service called Mechanical Turk – it’s a reference to an automaton, built in the late 18th century, that played excellent chess. Hundreds were fooled by the contraption, until it was eventually revealed that it worked because, contained within the heart of the machine, was a small man – a skilled chess player – who moved a series of levers to make his moves on the board.
Amazon’s Mechanical Turk tries to solve complex problems by farming them out to web users around the world. Experimenting with the site (which is badly overloaded and mostly broken) yesterday and today, I got assigned two types of tasks. The more common was a classic example of a human filtering problem: image identification that would be hugely difficult for an algorithm, but fairly easy for a human.
Amazon’s A9 search engine includes a feature called Blockview – it lets you see photos of buildings on both sides of the street in certain cities. (In Fargo, all the pictures are snow-covered.) Evidently, photographers drove down the streets, taking photos every few seconds. A9 would like to provide photos of businesses – this requires aligning the photos with the street addresses on record for businesses. So Mechanical Turk presents me with the business name and street address, and asks me to choose which of five photos best represents the business. I enter my selections, they get reviewed (somehow) by the people who’ve assigned the task, and they pay me $0.03 for my time.
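To make the shape of the task concrete, here’s a purely illustrative sketch of one of these photo-matching jobs as a data record. The field names and the sample address are my own inventions, not Amazon’s actual format:

```python
# Illustrative only: one way to model a single photo-matching task.
# Field names and the sample address are hypothetical, not Amazon's format.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PhotoMatchTask:
    business_name: str                  # e.g. "Smith and Company Insurance"
    street_address: str                 # address on record for the business
    candidate_photos: List[str]         # URLs of the five street photos
    reward_usd: float = 0.03            # payment per completed task
    chosen_photo: Optional[int] = None  # index (0-4) the worker selects

task = PhotoMatchTask(
    business_name="Smith and Company Insurance",
    street_address="123 Main St, Fargo, ND",  # hypothetical address
    candidate_photos=[f"https://example.com/photo{i}.jpg" for i in range(5)],
)
task.chosen_photo = 2  # the worker picks the third photo
```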
It would be hard to overstate just how difficult this task would be for a computer algorithm. In one of the cases I saw, I had to decide that “Smith and Company Insurance” was probably the squat building with the Allstate sign on top – making that decision algorithmically would have required an amazing amount of common-sense knowledge. In other cases, the matching requires picking very small pieces of text from low-contrast images, something that’s notoriously difficult for computer vision algorithms to identify. It took me roughly 15 seconds per task, once the images had loaded.
It’s probably not worth my time at $0.03 per task to identify images – even once the system’s working correctly and I can get four tasks to load per minute, it’s $7.20 an hour. In Ghana, however, $7.20 is a great hourly wage. Assume it costs about $1 an hour to purchase time at a cybercafe (as it does in Accra) – it’s certainly possible that Mechanical Turk could provide a decent living for someone with free time and adequate connectivity. Most attractively, if you’re an unemployed young man in Accra, you don’t need any formal qualifications to start this job, just a valid email address.
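The arithmetic, for anyone who wants to poke at it – all three inputs come from the figures above:

```python
# Back-of-the-envelope earnings check; all inputs are the figures cited above.
reward_per_task = 0.03          # dollars per image-matching task
tasks_per_minute = 4            # once the site loads tasks reliably
cybercafe_cost_per_hour = 1.00  # approximate hourly rate in Accra

gross_per_hour = reward_per_task * tasks_per_minute * 60   # 7.20
net_per_hour = gross_per_hour - cybercafe_cost_per_hour    # 6.20 after connectivity

print(f"gross ${gross_per_hour:.2f}/hr, net ${net_per_hour:.2f}/hr")
```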
(Of course, getting paid is a problem. Amazon lets you collect credit and apply it to Amazon purchases, or will remit funds to a US bank account. Neither of these is a particularly good solution for my theoretical Ghanaian freelancer. It will be interesting to see if Amazon tries to solve this problem in the future – is there sufficient interest in this sort of piecework domestically that Amazon doesn’t need to accommodate workers in other nations?)
Think folks won’t be interested in doing this sort of work? Take a look at the phenomenon of gold farming, where players of massively multiplayer online games complete a repetitive task over and over to gain in-game currency, which they pass to bosses, who sell it online (usually through eBay) and pass some of the money back to the farmers. I’d be willing to bet that someone, somewhere, is trying to figure out how to make money on Mechanical Turk using accounts in the US and workers in Ghana, the Philippines, or Turkey.
The most intriguing part of the Mechanical Turk project, in my opinion, is that Amazon is running it as a web service. This means that anyone who’s got a task which could be distributed to thousands of users can write a web services application and turn the wisdom of the web onto the problem… much as we did with the Katrina Peoplefinder project.
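For a sense of what “writing a web services application” against Mechanical Turk looks like, here’s a minimal requester-side sketch. I’m using the present-day boto3 MTurk client and its sandbox endpoint as a stand-in for the original API, and the question text, reward and timings are illustrative guesses rather than Amazon’s actual values:

```python
# Minimal requester-side sketch using the boto3 MTurk client (sandbox endpoint,
# so no real money moves). Reward, durations and question text are illustrative.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A single free-text question, expressed in MTurk's QuestionForm XML.
question_xml = """<?xml version="1.0" encoding="UTF-8"?>
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>photo_choice</QuestionIdentifier>
    <QuestionContent>
      <Text>Which of the five photos (1-5) best matches the named business?</Text>
    </QuestionContent>
    <AnswerSpecification>
      <FreeTextAnswer/>
    </AnswerSpecification>
  </Question>
</QuestionForm>"""

hit = mturk.create_hit(
    Title="Match a business to its street photo",
    Description="Pick the photo that best shows the named business.",
    Keywords="image, matching, business",
    Reward="0.03",                    # dollars, passed as a string
    MaxAssignments=1,                 # how many workers answer this task
    AssignmentDurationInSeconds=300,  # time allowed once a worker accepts
    LifetimeInSeconds=86400,          # how long the task stays listed
    Question=question_xml,
)
print("Created HIT:", hit["HIT"]["HITId"])
```

The point is less the particular library than the architecture: defining the task, setting the payment and collecting the answers are all just API calls, so any application that can generate tasks programmatically can tap the same pool of workers.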
What can thousands of web users do for you? Pretty much anything you’re willing to pay them to do.
Hey Ethan, thanks for providing your always valuable “360 degrees of perspective” on this particular topic. I too tried to review the service, but ran into similar difficulties and eventually gave up.
When I met with the A9 team last June, I presented a perspective on their Yellow Pages service within the context of technology serving human needs. While we didn’t quite meet at the apropos time—they were really just looking for an interface designer grunt to whip out layout—I did have a few interesting conversations over the course of the day.
A9 is very algorithm oriented (“A9” standing for the nine letters in the word “algorithm”), as is the majority of the Valley (as you well know). The interesting part of our conversation that day dealt with the potential partnership between human participation *and* mathematical techniques of information retrieval. We spent a good deal of time discussing persona modeling and context scenarios which could drive instances of smart folksonomies, from both a BtoB perspective and an end-user interaction model.
The Mechanical Turk concept is a pretty smart way to begin the merger between thesauri-level tags across multi-level information objects; information retrieval and intelligent presentation in the interface is moving forward at light speed.
And it would be *criminal* if Amazon didn’t open up this service to all of the world’s inhabitants.
Hi,
We are very interested in this model of data collection. But because Yellowikis (www.yellowikis.org) is free and open, it is possible for anyone to sell their Human Intelligence services to local businesses. The ‘editor’ adds their clients’ contact information to the Yellowikis database – and gets to keep everything they can earn, cash in hand. We just provide the infrastructure, which is indexed every few days by Google, so clients quickly appear in the Google search results.
Result: Happy client, Happy Editor.
Let me know if you think this might work in Accra. Maybe we can set up a pilot programme?
Human filtering will never work as there is way too much to sift through these days on the net. It’s great but impossible.
Ethan – Thanks for a fascinating and informative summary of this new approach to solving particular types of problems. I like that you see this as a possible source of employment for people in the developing world. I’m also wondering if there aren’t some problems of development that could be approached this way. Finding sites for wells based on satellite photos? Identifying disease clusters? Anybody have a better idea?
Lawrence – I love the idea. I think the challenge is identifying the problems where human filtering could help on developing world problems. For instance, do satellite photos exist of places where one might want to place wells? Is it easy or difficult to train folks to identify those sites? Those are the sorts of challenges smart programmers have to figure out before you can turn a task over to a team that does human filtering.
This idea is great, but I don’t think it’s possible… but it doesn’t hurt to try.