It started over dinner with a friend from France Saturday night. We mentioned our favorite crazy racing film (C’était un rendez-vous by Claude Lelouch, above, in which a Ferrari 275 GTB takes on downtown Paris at 220 kilometers per hour), which spurred him to recall Bill Joy’s famous April 2000 thesis in Wired: Why the Future Doesn’t Need Us.
Our friend, a fluent IT brainiac, said the idea was simple: We’re moving very fast toward a technological future, but almost no one thinks about the consequences or the end point. Our pace today resembles that of the physicists who tested the first nuclear bomb in the U.S. desert in 1945; Edward Teller warned that there was a three-in-a-million chance the explosion would set the entire atmosphere on fire, but the scientists went ahead anyway.
The trouble, Joy wrote, is that if technology continues to progress, machines will eventually think, and when they do, those machines may not need people. Science fiction? No, just Moore’s law, under which computing power doubles every 18 months:
I (once) believed that the rate of advances predicted by Moore’s law might continue only until roughly 2010, when some physical limits would begin to be reached …
But because of the recent rapid and radical progress in molecular electronics – where individual atoms and molecules replace lithographically drawn transistors – and related nanoscale technologies, we should be able to meet or exceed the Moore’s law rate of progress for another 30 years. By 2030, we are likely to be able to build machines, in quantity, a million times as powerful as the personal computers of today.
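Joy’s “a million times as powerful” figure falls straight out of the doubling rate: 30 years at one doubling every 18 months is 20 doublings, and 2^20 is just over a million. A quick sketch of the arithmetic (the function name and the 1.5-year doubling period are our own framing, not Joy’s):

```python
def moores_law_factor(years, doubling_period_years=1.5):
    """Power multiplier after `years` of growth, doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# 2000 -> 2030: 30 years = 20 doublings
factor = moores_law_factor(30)
print(f"{factor:,.0f}x")  # 20 doublings -> 1,048,576x, roughly "a million times"
```

The same formula shows why Joy’s earlier 2010 cutoff mattered so much: stopping ten years in would have meant only about 100x, not a million.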
If Joy is right, we’re not talking about video cell phones in three decades. We’re talking about machines with human-like consciousness. And if computers someday wake up, it is not a stretch to think that thinking machines would learn to make copies of themselves. We could then either upload our souls into robots, or robots might just move on without us. We would control them for a while … but what then?
Along the way, technology advances pose additional risks. Nanotechnology and genetic engineering are much harder to control than nuclear secrets (which at least required centralized government control). Today, small cadres of scientists can pass the latest blueprints for life over the internet, tests and development are done around the world, and a single mistake by a single university could unleash a gray goo that takes over the planet.
Joy’s article was lauded instantly. The Times of London compared it to Einstein’s 1939 warning to President Roosevelt that, soon, someone would build a nuclear bomb. Readers like Jeanne DesForges, associate VP at Morgan Stanley Dean Witter, were moved to weep at their desks. But the wisest reaction may have been that of Gregory Stock, director of the Program on Medicine, Technology, and Society at UCLA, who wrote back:
I say that if we are one day to be transcended by machines, so be it … even the Dalai Lama once indicated he could imagine being reincarnated in a computer. So have a little faith, Bill. You may not like the future’s eventual shape, but your grandchildren – whoever or whatever they are – will probably think it’s just fine.