Stephen Downes, Anders Sandberg on Cloud Intelligence

I had the pleasure of sharing the stage at Ars Electronica’s symposium on “Cloud Intelligence” with Stephen Downes and Anders Sandberg. I wasn’t able to liveblog their presentations, in part because I was jetlagged, and in part because they both gave extremely dense, thoughtful presentations. Better late than never, here are my notes days after the fact.

Canadian researcher Stephen Downes began the conversation on Cloud Computing at Ars Electronica with a complex, sophisticated talk about collaboration, conviviality and communication in virtual worlds. I’m slightly reluctant to blog it, because I’m still wrestling with the ideas, and suspect I won’t do justice to the complexity of the arguments… but here’s a gloss on some of the ideas.

Downes warns that our understanding of human behavior is based on folk psychology, an understanding of human nature based on fictions. Some of these fictions center on an incomplete analogy, an understanding of human communities as analogous to neurons in the brain.

This analogy leads to a picture of collaboration that implies a sort of sameness, where meaning is created out of the sameness of every entity in the network. It’s a unity that looks like a rock or an ingot of metal, a mass of elements rather than of distinct parts. But that’s not actually how the brain works. What works in neural networks is diversity, and it’s the diversity of inputs and outputs that distinguishes one neuron from another. “A neuron’s perceptions – its connections – is unique to its network.”

This diversity can resemble our use of language – each person has a different connection to Paris, and everyone has a different connection when they say, “Paris is the capital of France.” To communicate, we need syntactical structures to be consistent. Wittgenstein’s language games show us that, while the meaning for one person and another might be very different, communication is possible via a negotiation process, based on each person’s individual understanding of the world – this negotiation is possible through syntactic structures.

The cloud suggests that we share a common set of (communication) infrastructure. Yet instead of becoming the same, we retain our individuality. The communication and interaction of diverse neurons is what creates consciousness. It’s not a process of collaboration but cooperation. “A city is not a group of people directed to create one goal, but each seeking their own goals.”

Downes references Donald Hebb, who documented the connection by similarity of neurons in the human brain. This research suggests that neurons cluster in much the same way humans do, by common interest. They also cluster by proximity, which, Downes reminds us, Hume saw as the key ingredient in understanding cause and effect. Neurons are affected by events, experience feedback effects, and appear to seek stable states.
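Hebb’s principle is often summarized as “neurons that fire together wire together”: a connection strengthens in proportion to how often the two neurons are active at the same time. A minimal sketch of that update rule (my illustration, not anything presented in the talk):

```python
def hebbian_update(w, pre, post, lr=0.1):
    """Hebb's rule: each weight grows in proportion to the joint
    activity of its presynaptic and postsynaptic neurons."""
    return [[w[i][j] + lr * pre[i] * post[j] for j in range(len(post))]
            for i in range(len(pre))]

# Two input neurons, one output neuron. The input that fires along
# with the output strengthens its connection; the silent one doesn't.
w = [[0.0], [0.0]]
for _ in range(10):
    w = hebbian_update(w, pre=[1.0, 0.0], post=[1.0])
print(w)  # first weight has grown, second remains zero
```

This is the “connection by similarity” Downes invokes: correlated activity, not central direction, is what builds the clusters.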

“When we consider a society of syntactically interacting entities, the property of any individual entity is not the property of the whole.” Downes cites Kevin Kelly, who warns us of a new socialism, a content-based socialism that is the same as the “authority-based power-law driven old capitalism.” Instead, we could hope for a form of socialism as personal empowerment, an equality of opportunity and sociality for individual, independent but interdependent entities.

What we need to understand about the cloud is that each entity works based on its own internal needs and drives. In a cloud, there’s an equality of opportunity, the ability for each entity or neuron to connect to the network as a whole. What we’re going to see is a diversity of perspectives, not a shared understanding. But through the interdependence of entities, we expect interaction, truth and knowledge to be emergent properties of this network.

Clouds based on conviviality would offer appropriate, congenial alternatives to social organization, systems that work better than our systems of star power and mass media. But building these systems recognizes that we’re communicating not just with words, but with each act and artifact we create.

I’m confident that I’ve gotten the talk wrong in more than one way, and equally confident that Stephen will join in the conversation and set me straight. Most helpful to me in understanding his themes was a correction Stephen offered to a comment I made in the question and answer session – I made a reference to the motivation of a group of people. Stephen argued that groups don’t have motivations – individuals do, and he argues we can’t understand how cooperation takes place until we focus on the idea that individuals are autonomous and motivated by their individual needs.

Following my talk at Ars, (mad) neuroscientist Anders Sandberg offered a talk on cloud superintelligence, the provocative idea that cooperation via networked technologies could help make humans much more intelligent than we are now. Sandberg offers the provocation that “Austria has 821,000,000 IQ points” before acknowledging that people often turn their minds off to a concept as amazing as superintelligence.

“Someone thinking a thousand times faster would be able to solve a lot of problems.” We might see remarkable gains if humans had more memory, or could maintain multiple forms of attention. Unfortunately, most study of group cognition is the study of how teams complete a task and what barriers stand in the way – Sandberg would like to see a much broader exploration.

He observes that teamwork studies generally involve logistical equations: total work = the sum of individual work minus communication losses. “The best cognitive drug gives you a 20% advantage on certain tests. But asking a set of your friends to help you out might make you much better.” He asks us to imagine a dart-throwing contest. Even if you and your friends are all average dart throwers, a normal distribution means that five tries will give you a result one standard deviation away from the norm, and 30 tries will get you two standard deviations away. “This pretty quickly turns you into a good dart thrower.” (True enough, though you need to be able to select the throws to keep. Two standard deviations from the norm might mean hitting the bullseye… or the bartender.)
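Sandberg’s dart arithmetic checks out with a quick simulation, assuming each throw is an independent draw from a standard normal distribution and you keep your best throw:

```python
import random
import statistics

def best_of(n_throws, trials=20000, seed=42):
    """Average best score (in standard deviations above the mean)
    when keeping the best of n_throws standard-normal draws."""
    rng = random.Random(seed)
    return statistics.mean(
        max(rng.gauss(0, 1) for _ in range(n_throws)) for _ in range(trials)
    )

# Extreme-value theory puts the expected best of 5 draws at about
# 1.16 SD and the best of 30 at about 2.04 SD, matching the talk's
# "one standard deviation" and "two standard deviations."
print(best_of(5))
print(best_of(30))
```

The catch noted above is built into `max()`: the gain only exists if you can recognize and keep the best attempt.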

Nodding to my objections that the cloud doesn’t inherently make us cosmopolitans, Sandberg notes “we’ve already got an incredible set of resources at our hand” through the Internet. He looks at “wikimagic”, the strange magic of Wikipedia, in which group editing appears to lead toward improving article quality. While the quality of articles sometimes drops, “it requires huge amounts of noise to get the process of improvement to fall off.” That’s important, because people overestimate their own abilities and can make unhelpful contributions. As a whole, the quality tends to go up on popular pages, while “if it’s an obscure neuroscience topic, there’s only three people who know the answer and they all disagree.”

Systems benefit by adding feedback. In building these complex systems, we need to manage social design and master the ethical design of feedback. In information markets, where people are incented to offer projections in hopes of financial gain and can buy and sell shares, predictions tend to be surprisingly close to actual results.
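The talk doesn’t name a mechanism, but one standard way such markets are built is Hanson’s logarithmic market scoring rule, where the current share price doubles as the market’s implied probability. A minimal sketch:

```python
import math

def lmsr_cost(q_yes, q_no, b=100.0):
    """Cost function of the logarithmic market scoring rule;
    b controls liquidity (how fast prices move)."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes, q_no, b=100.0):
    """Instantaneous YES price = the market's implied probability."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

def buy_yes(q_yes, q_no, shares, b=100.0):
    """Cost of buying `shares` YES shares at the current state."""
    return lmsr_cost(q_yes + shares, q_no, b) - lmsr_cost(q_yes, q_no, b)

# A fresh market starts at even odds; traders buying YES push
# the implied probability up, aggregating their private beliefs.
print(price_yes(0, 0))    # 0.5
print(price_yes(60, 0))   # above 0.5 after 60 YES shares are bought
```

The feedback loop is the point: mispriced predictions are profit opportunities, so self-interested trading pulls the price toward the crowd’s best estimate.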

Sandberg raises an important archaeological question for the crowd: “Why did Tasmanians lose the use of bone tools, fishing and warm weather clothing?” There’s evidence for these technologies early in Tasmanian history, but they’d fallen out of use by the time Europeans encountered Tasmanians. He offers an explanation: people imitate other people and rarely innovate. This suggests that the number of technologies that can be retained by a people depends on population size. If you’ve got too small a group, you have insufficient experts. As a result, isolated groups of people tend to lose their cultures and technologies. (Conversely, perhaps we’re more able to preserve and advance cultures and technologies if the Cloud links us into very large groups.)
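A toy simulation (my construction, not Sandberg’s model) makes the population-size argument concrete. Assume each skill survives a generation only if at least one person in the group happens to master it, and that a lost skill is never reinvented, since people “imitate and rarely innovate”:

```python
import random

def surviving_skills(population, n_skills=24, p_master=0.02,
                     generations=100, seed=1):
    """Toy cultural-loss model: a skill persists a generation only if
    someone in the population masters it; once lost, it stays lost."""
    rng = random.Random(seed)
    skills = n_skills
    for _ in range(generations):
        kept = 0
        for _ in range(skills):
            # chance that at least one person masters this skill
            if any(rng.random() < p_master for _ in range(population)):
                kept += 1
        skills = kept
    return skills

print(surviving_skills(4000))  # large group: nearly every skill persists
print(surviving_skills(50))    # small, isolated group: skills drain away
```

With 4,000 people the per-generation chance of losing any skill is vanishingly small; with 50, each skill faces roughly a one-in-three loss risk every generation, and the toolkit erodes to nothing.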

If we can build simple learning agents, which aren’t especially good at solving problems by themselves, perhaps we can solve more complex problems by adding more agents. He asks us to consider the problem of discovering blogs we want to read. One human reader suggests something, then branches out and lets other readers react to the suggestion. Initially, the participants are bad at communicating as the network is, essentially, random. But it quickly self-organizes to be good at amplifying and spreading particular stories, even though each reader is “resource-limited” – i.e., can consider only a few blogs. “With larger groups, information can cascade very effectively.”
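A rough sketch of this dynamic (my illustration, not Sandberg’s actual simulation): seed one story in a random network where each resource-limited reader follows only a few others, and count how many rounds it takes to reach half the network.

```python
import random

def cascade_rounds(n_agents, follows=3, seed=7):
    """Toy information cascade: each agent follows `follows` random
    others and adopts a story once anyone they follow has it.
    Returns rounds until half the network has the story
    (capped at n_agents rounds if the cascade stalls)."""
    rng = random.Random(seed)
    following = [rng.sample([j for j in range(n_agents) if j != i], follows)
                 for i in range(n_agents)]
    has_story = {0}            # a single reader seeds the story
    rounds = 0
    while len(has_story) < n_agents / 2 and rounds < n_agents:
        has_story |= {i for i in range(n_agents)
                      if any(j in has_story for j in following[i])}
        rounds += 1
    return rounds

# Growing the network tenfold adds only a few rounds, because the
# story spreads exponentially even though each agent sees very little.
print(cascade_rounds(100))
print(cascade_rounds(1000))
```

The point matches the quote: each agent considers only a handful of sources, yet the network as a whole amplifies and spreads stories very quickly.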

How does this relate to superintelligence? “Often information can substitute for smarts.” He shows us “80 Million Tiny Images“, a project from MIT that’s collecting images of common nouns from the web and using them to vastly improve the abilities of computer systems and robots to identify objects. The system, he tells us, is very effective at identifying images, almost at human-level performance, because it can rely on a huge set of images, which show a wide range of cars, rotated and placed in countless different ways. This sort of “practical form of superknowledge” is something we have to look forward to if we manage to harness the cloud in ways that enhance our skillsets. Perhaps it lets us glance forward to a future posthuman realm, where this ability to problem-solve functionally turns us into “superorganisms”.