In his 1993 essay “The Coming Technological Singularity,” Vernor Vinge predicted that within 30 years, advances in computing would lead to a superhuman intelligence. “Shortly after” that point, he wrote, “the human era will be ended.”
The theme of humans creating something larger than themselves is not new. Sci-fi films such as Star Wars and The Terminator are filled with robotic creatures; before them, in 1816 Mary Shelley spent a summer telling stories with friends in Geneva, Switzerland, and dreamed up Frankenstein; before then, in 3rd-century BC China, the text Lie Zi described a human-shaped automaton made of leather, wood and artificial organs. The Greek god Hephaestus (known as Vulcan by the Romans) had mechanical servants, and the Judeo-Christian scriptures allude to our own creator forging humans out of dust. Our root belief, it seems, is that something smart can be made out of materials that are stupid.
The best article ever written on the subject of post-human evolution was Bill Joy’s April 2000 masterpiece in Wired magazine, “Why the Future Doesn’t Need Us,” in which Joy described how a conversation with futurist Ray Kurzweil led him to believe that advances in technology would eventually make humans irrelevant. Joy quotes everyone from the Unabomber to Hans Moravec, founder of one of the world’s largest robotics programs, who wrote that “biological species almost never survive encounters with superior competitors.”
Joy concluded that our emerging scientific progress in genetics, nanotechnology, and robotics will eventually create a world in which faster systems replace the human race. His logic followed a simple mathematical fact: if processors continue to double in speed every 18 months, à la Moore’s law, then by 2030 computers will be 1 million times as powerful as they were at his writing in 2000 — “sufficient to implement the dreams of Kurzweil and Moravec.” Intelligent robots, once born, would of course initially work for us, but as their intelligence continued to scale, soon we would no longer understand them, and they would no longer need us. Conceivably, we could eventually port our minds into the new systems, but Joy concludes, “if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human?”
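Joy’s back-of-the-envelope figure holds up: 30 years at one doubling every 18 months is 20 doublings, and 2 to the 20th power is just over a million. A minimal sketch of the arithmetic (the variable names are mine, not Joy’s):

```python
# Moore's-law-style growth: performance doubles every 18 months.
years = 30                    # Joy's horizon: 2000 to 2030
doublings = years * 12 / 18   # 20 doublings in 30 years
growth = 2 ** doublings       # 2^20
print(f"{growth:,.0f}x")      # 1,048,576x -- roughly "1 million times as powerful"
```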
Heavy stuff. Not everyone agrees that this so-called “Singularity,” or rise of artificial intelligence, is possible. Kyle Munkittrick, program director for the Institute for Ethics and Emerging Technologies, recently recapped a raging online debate among AI thinkers over whether the Singularity will happen. The objections: what makes humans intelligent is tied to our biological systems, flooded as they are with raging hormonal emotions (crave sex or get pissed off lately?), and human minds specialize in creative, abstract “right-brain” thinking that computers so far cannot duplicate.
Creativity, the nonlinear, non-deductive formulation of a conclusion, is especially difficult for computing systems. We can ask a machine what a brick is and it will get the answer right:
A small rectangular block made out of fired clay.
But ask “what are all the possible uses for a brick?” and computers become stumped. A human mind will win every time:
Building material. Doorstop. Weapon. Bug-whacker. Footrest. Canoe anchor. Forearm-weight trainer. Gum-off-chair scraper. Party gift. Color guide. A device to walk-with-on-your-head to improve posture.
A computer may pass the Turing test by mimicking human intelligence, but no matter how fast it gets, it may never ideate or become aware of itself. Munkittrick writes that rather than a new abstract intelligence suddenly rising up, “the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence.” You’ll still be you, but you’ll plug in to things that make you smarter.
iPhones, your AI enabler
We see this already today in our use of mechanical or digital systems to surpass our biological limits. Eyeglasses help us see; tooth fillings help us eat; cars help us move; more recently, Google helps us remember. Munkittrick proposes that smartphones, which will soon be held by half the planet’s population, are a near-perfect form of “cybernetic left-brain augmentation” providing us with maps, dictionaries, sports scores, and visual overlays so we can better interpret the world around us. The creative right-brain thinking will always be us, but sorting data inputs into logical patterns may be something we leave to the tiny tablets in our hands. (Stop worrying, kids, you no longer have to memorize the past presidents of the United States.)
Even augmentation, though, will change our human intelligence. Is that good or bad? This is the current wrestling match between Nicholas Carr (author of “The Shallows,” who argues that Google-aided memory makes us all stupider) and Clay Shirky (author of “Cognitive Surplus,” who counters that technology empowers us to create smarter things in our free time, such as Wikipedia). Carr suggests we’re all getting lazy, skimming the surface of blogs and neglecting to memorize things, because Mom’s phone number is now loaded on our smartphones. Shirky has a more hopeful point of view, admitting that in the short term every advance in technology results in a bunch of lousy output (cheesy novels after the invention of the printing press, crappy TV shows after the birth of video) but eventually gives rise to higher forms of art (Wuthering Heights; The King’s Speech). If you don’t like the silliness buzzing in social media today, just wait until we really figure out what to do with the new tools.
One hint as to where digital appendages are going is the world of art, where video, photography and music are becoming easier than ever before to produce professionally. Bill Green, advertising strategist and author of the blog Make the Logo Bigger, notes the photo at the top of this post looks like a lost work of art by Richard Avedon … but is really just a filtered snapshot by hobbyist Policromados, who used the new Instagram service to tweak a photograph. Technology is starting to empower the creative side of our brains as well.
The question isn’t when AI will come. We already rely on technology to get to the office, produce our work, craft our art, remember our facts, heat our food, and consume our entertainment. Our intelligence, defined as our awareness and interpretation of life, is already artificial.
Now that we know we are robots, we’re only debating the degree of scale.
Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.