Kevin Kelly acknowledges that he believes in a dangerous idea – technological determinism, the idea that our technology shapes and determines who we are and how we live. He acknowledges that he’s got an uneasy relationship with technology – he spent years as a nomad, living out of a bicycle and a backpack. But he’s fascinated by the idea that technology has its own agenda and directions.
He offers us some big numbers:
– 1 billion PCs
– 1 million emails per second
– 1 million IMs per second
– 8 terabytes per second of Internet traffic
– 65 billion phone calls a year
He imagines this not just as a picture of the internet, but as a very large computer, with 200 terabytes of RAM, 20 exabytes of storage, and 10 terabytes per second of bus speed. This computer starts looking a lot like the human brain. The web’s 1 trillion links approach the number of synapses in a human brain; its 1 quintillion transistors approach the number of neurons. “The parameters of the web as a whole are very similar to the human brain.”
Why is this interesting? Because our brains aren’t doubling in size every two years. Kelly tells us that sometime between 2020 and 2040, the web will exceed human beings in computing power. “Our collective intelligence will be dwarfed by all the things we’re making.”
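Kelly’s crossover claim can be sanity-checked with a back-of-envelope calculation. A minimal sketch in Python, taking the two-year doubling from the talk but assuming (purely for illustration – the talk gives no baseline year) that the web matched one human brain around 2007:

```python
# Back-of-envelope extrapolation: if the web's capacity matched one human
# brain around 2007 (an illustrative assumption, not a figure from the talk)
# and doubles every two years, how many brain-equivalents does it reach?
def brain_equivalents(year, baseline_year=2007, doubling_years=2.0):
    return 2 ** ((year - baseline_year) / doubling_years)

for year in (2020, 2030, 2040):
    print(f"{year}: ~{brain_equivalents(year):,.0f} brain-equivalents")
```

On these assumptions the web’s capacity grows by a factor of roughly 90,000 between 2007 and 2040 – the kind of runaway curve, against the flat line of human brain size, that Kelly seems to have in mind.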
So what is it that technology wants?
– Technology wants to increase its efficiency and power, as Moore’s law and Kurzweil’s calculations of process speed indicate.
– Technology wants to replicate itself freely, pushing against limits like copyright.
– Technology doesn’t want to be turned off. Trying to turn back the technological clock – outlawing guns in Japan or the crossbow in Europe – rarely lasts more than a generation.
Kelly argues that technology alters environments to perpetuate itself. He offers the example of the melting of icecaps from global warming. These melting icecaps open a northern route for oil tankers, allowing more oil tankers to deliver fuel… creating more global warming. Does this mean that we’re “paving the earth into the technoplanet”? But Kelly insists that tech wants what we want – it wants clean water, because we need clean water for manufacturing. It wants diversity, as seen by the rise of mass customization and personalization…
What does Kelly worry about? He’s concerned about all self-replicating technologies: genetic engineering, nanotech, and robotics. It’s critical that we train our technologies, so that they follow Asimov’s rules of robotics: don’t hurt humanity, don’t hurt humans, obey humans, protect yourself, and otherwise have fun. He argues that there are no bad technologies, just bad parenting, giving DDT as an example – it’s a bad thing to spray DDT on crops, but great to spray in rural buildings to prevent malaria.
Technologies, like children, are not neutral – they’re creative forces for good. They can turn good or bad, but civilization depends on having the good outweigh the bad, even by a thin margin. Civilization is the 1% difference between the good and the bad.
Technology gives us choices, including the choice to turn it off. But the rise of technology means that children will have a chance to express themselves through media that haven’t yet been created. Imagine Mozart in a world without pianos, or Hitchcock before the invention of film. Children are being born whose technologies have not been invented yet, which means we have a moral obligation to let technology grow and increase.
(Kelly may be being intentionally provocative here. At least I hope so.)
I hope KK was being silly or was misunderstood. There’s an argument to be made for technology begetting more technology, but this example is really stupid. If global warming causes more violent storms, which make offshore drilling more expensive, lowering emissions and mitigating global warming, would KK claim this was also an instance of technology perpetuating itself, even though the effect is the opposite?
My quibble is not with global warming itself, but with the irrelevance of the example. It’s something like “correlation is not causation,” though not exactly.
I also have a quibble with the piece because it seems to rest on two underlying (false) assumptions: first, that at some point all this technology will collectively become self-aware and self-directing (I understand that, in a networked world, having even one self-aware component would have the practical effect of making the whole thing self-aware), which seems just a little off; and second, that large computational capacity is equivalent to high intelligence.
Both these assumptions have been proved wrong – look at Deep Thought, which now has serious competition from desktop computers that have much less computational capability but use more sophisticated algorithms (intelligence) – (OK
That is very weird – my comment got truncated. Do you have a limit on comment length in place?
ntwiga, why do you say it assumes all technology being self-aware? The way I see it, he considers humans to be part of the superstructure of technology; if we reach a point where the weight of the non-human part is much larger than the human part, then it’s fundamentally different even if the only self-aware functions are still exclusively in the human modules.