A brief history of how you stopped being human


In his 1993 essay “The Coming Technological Singularity,” Vernor Vinge predicted that within 30 years advances in computing would lead to a superhuman intelligence. “Shortly after” that point, he wrote, “the human era will be ended.”

The theme of humans creating something larger than themselves is not new. Sci-fi films such as Star Wars and The Terminator are filled with robotic creatures; before them, in 1816 Mary Shelley spent a summer telling stories with friends in Geneva, Switzerland, and dreamed up Frankenstein; before then, in 3rd-century BC China, the text Lie Zi described a human-shaped automaton made of leather, wood and artificial organs. The Greek god Hephaestus (known as Vulcan by the Romans) had mechanical servants, and the Judeo-Christian scriptures allude to our own creator forging humans out of dust. Our root belief, it seems, is that something smart can be made out of materials that are stupid.

The best article ever written on the subject of post-human evolution was Bill Joy’s April 2000 masterpiece in Wired magazine, “Why the Future Doesn’t Need Us,” in which Joy described how a conversation with futurist Ray Kurzweil led him to believe that advances in technology would eventually make humans irrelevant. Joy quotes everyone from the Unabomber to Hans Moravec, founder of one of the world’s largest robotics programs, who wrote that “biological species almost never survive encounters with superior competitors.”

Joy concluded that our emerging scientific progress in genetics, nanotechnology, and robotics will eventually create a world in which faster systems replace the human race. His logic followed a simple mathematical fact: if processors continue to double in speed every 18 months, a la Moore’s Law, the 30 years between his writing in 2000 and 2030 allow 20 doublings, making computers roughly a million times as powerful, “sufficient to implement the dreams of Kurzweil and Moravec.” Intelligent robots, once born, would of course initially work for us, but as their intelligence continued to scale we would soon no longer understand them, and they would no longer need us. Conceivably, we could eventually port our minds into the new systems, but Joy concludes, “if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human?”
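The arithmetic behind that figure is simple enough to check. Here is a minimal back-of-the-envelope sketch in Python; the 18-month doubling assumption comes from the essay, and the code itself is only an illustration:

    # Back-of-the-envelope check of Joy's Moore's-law figure.
    # Assumption from the essay: processing power doubles every 18 months.
    doubling_period_years = 1.5
    start_year, end_year = 2000, 2030

    doublings = (end_year - start_year) / doubling_period_years  # 30 / 1.5 = 20
    speedup = 2 ** doublings                                      # 2^20 = 1,048,576

    print(f"{doublings:.0f} doublings by {end_year}: ~{speedup:,.0f}x the power of a {start_year} machine")

Twenty doublings come out to 1,048,576, which is where the “million times” figure comes from.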

Heavy stuff. Not everyone agrees that this so-called “Singularity,” or rise of artificial intelligence, is possible. Kyle Munkittrick, program director for the Institute for Ethics and Emerging Technologies, recently recapped a raging online debate among AI thinkers over whether the Singularity will happen. The objections include the idea that what makes humans intelligent is tied to our biological systems, which are awash in hormonal emotions (craved sex or gotten pissed off lately?), and the idea that human minds specialize in creative, abstract “right-brain” thinking that computers so far cannot duplicate.

Creativity, the nonlinear, non-deductive formulation of a conclusion, is especially difficult for computing systems. We can ask a machine what a brick is and it will get the answer right:

A small rectangular block made out of fired clay.

But ask “what are all the possible uses for a brick?” and computers become stumped. A human mind will win every time:

Building material. Doorstop. Weapon. Bug-whacker. Footrest. Canoe anchor. Forearm-weight trainer. Gum-off-chair scraper. Party gift. Color guide. A device to walk-with-on-your-head to improve posture.

A computer may pass the Turing test by mimicking human intelligence, but no matter how fast it is, it may never ideate or become aware of itself. Munkittrick writes that rather than a new abstract intelligence suddenly rising up, “the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence.” You’ll still be you, but you’ll plug in to things that make you smarter.

iPhones, your AI enabler

We see this already today in our use of mechanical or digital systems to surpass our biological limits. Eyeglasses help us see; tooth fillings help us eat; cars help us move; more recently, Google helps us remember. Munkittrick proposes that smartphones, which will soon be held by half the planet’s population, are a near-perfect form of “cybernetic left-brain augmentation” providing us with maps, dictionaries, sports scores, and visual overlays so we can better interpret the world around us. The creative right-brain thinking will always be us, but sorting data inputs into logical patterns may be something we leave to the tiny tablets in our hands. (Stop worrying, kids, you no longer have to memorize the past presidents of the United States.)

Even augmentation, though, will change our human intelligence. Is that good or bad? This is the current wrestling match between Nicholas Carr (author of “The Shallows,” his argument being that Google-aided memory makes us all more stupid) and Clay Shirky (author of “Cognitive Surplus,” his counterpoint being that technology empowers us to create smarter things in our free time, such as Wikipedia). Carr suggests we’re all getting lazy, skimming the surface of blogs and neglecting to memorize things because Mom’s phone number is now loaded on our smartphones. Shirky has a more hopeful point of view, admitting that every advance in technology initially results in a bunch of lousy output (cheesy novels after the invention of the printing press, crappy TV shows after the birth of video) but eventually gives rise to higher forms of art (Wuthering Heights; The King’s Speech). If you don’t like the silliness buzzing in social media today, just wait until we really figure out what to do with the new tools.

One hint as to where digital appendages are going is the world of art, where video, photography, and music are becoming easier than ever to produce at a professional level. Bill Green, advertising strategist and author of the blog Make the Logo Bigger, notes that the photo at the top of this post looks like a lost work of art by Richard Avedon … but is really just a snapshot by hobbyist Policromados, who used the new Instagram service to filter and tweak it. Technology is starting to empower the creative side of our brains as well.

The question isn’t when AI will come. We already rely on technology to get to the office, produce our work, craft our art, remember our facts, heat our food, and consume our entertainment. Our intelligence, defined as our awareness and interpretation of life, is already artificial.

Now that we know we are robots, we’re only debating the degree of scale.

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.


5 thoughts on “A brief history of how you stopped being human”

  1. Very enjoyable post, Ben. For a long time I have enjoyed pondering how we are incorporating technology into our own evolution, literally. (See, for example: Brain/Machine Interfaces (BMI): Something to Think About.) However, I’m not sure I entirely go along with your conclusion that our intelligence is already artificial (though perhaps it’s just a difference in semantics).

    Our perceptions are, undeniably, enhanced by technology. But still, there is something within us that is doing the perceiving and, more importantly, experiencing the perceiving. To borrow an idea from Thomas Nagel, we all sense “what it is like” to perceive something. Have we built machines that have that same sense? I am not arguing that we never can, but I have seen no evidence that we already have.

  2. Interesting, thought-provoking post.

    Thoughts:

    1. How does natural selection play a role in our future? Will technology be adopted or rejected in the same way species & evolutionary traits get adopted or rejected? Will it finally come down to performance over preference?

    2. Perception is key. I’ve always believed that if an experience isn’t being perceived by an observer, the experience is irrelevant. Which means that if human perception is ever synthesized, it could give rise to experiences being created for non-humans. This could have a dramatic effect on usability/UX standards. – I created a presentation on perception that outlines my thoughts: http://www.slideshare.net/thejordanrules/perception-1766948

    3. The saddest part of becoming more artificial is the stories we hear about people losing their humanity due to technology: car accidents due to talking on the phone, suicide due to online bullying, or destructive online groups like http://proanaonline.com

    I’m hoping that the future will bring us closer to each other by offering everyone a better understanding of everyone else’s lives. I think key human traits like morality, compassion, empathy, and grace need to be considered when creating the next generation of user experiences.

  3. Julian, love your thoughts.

    Re natural selection, yes, it should work the same. Natural selection is based on three ideas: offspring are diverse (via mutations or new products); there are usually more offspring than an environment can support; and over time, the offspring most suited to the environment win and crowd the others out. We see this happening in nonbiological systems such as social networks and mobile apps already; a toy sketch of that loop in code follows at the end of this comment.

    Re perception, I don’t have an answer. The Turing test doesn’t satisfy me; but if the future were populated by cars that maintained themselves and robots that created new robots, and humans all died out, would the world still look like it was teeming with life? Yes. It’s not so much whether an experience is perceived as whether the observer can self-reflect that he or she (or it) is observing. At least, that is our human definition of intelligence. But do dogs and cats self-reflect? And if they don’t, aren’t they still a form of intelligence? Perhaps robotic AI will emerge as a slow haze of gradual self-awareness, with only a little in the beginning.

    Re humanity fading, yep.
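    Here is that toy sketch, a minimal and entirely hypothetical Python model (made-up fitness numbers and carrying capacity) of variation, overproduction, and selection acting on a nonbiological population of apps:

        import random

        # Hypothetical toy model: 100 "apps" start with random fitness;
        # the market (environment) only supports 10 of them.
        population = [random.random() for _ in range(100)]
        carrying_capacity = 10

        for generation in range(20):
            # Variation: each survivor spawns slightly mutated offspring.
            offspring = [max(0.0, parent + random.gauss(0, 0.05))
                         for parent in population for _ in range(3)]
            # Overproduction + selection: only the fittest fit the environment.
            population = sorted(offspring, reverse=True)[:carrying_capacity]

        print(f"mean fitness after selection: {sum(population)/len(population):.2f}")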

  4. Some things, like Moore’s Law, are running up against current technology limits. I spent 7 years in the semiconductor manufacturing industry, and part of the law is that circuits keep getting smaller. But copper and silicon are almost at their limits. If we move to other forms of circuitry (many are being tested), things could move faster or slower.

    I think there are valid points in everything here. On one hand we are getting lazier with technology, but on the other we are doing much more because of it.

    I might use voice search to find a business listing on my Android and refuse to grab the Yellow Pages. On the other hand, I can monitor Twitter for my business 24/7, something that in the past would have been more like 9-to-5 customer service.

    And what if we can bake Asimov’s three laws into robotics to ensure we still rule the machines?

  5. Ben,

    Loved this post, love this content.

    I particularly like how you’re making the argument for changes in degree rather than changes in kind, that we’re already on our way.

    And I like your optimism.

    Thanks,
    -Aaron
