Your future robot servants will lie to you


What if machines woke up with artificial intelligence, and then decided to cheat? 

Now, researchers toying with artificial intelligence (AI) have created robots that deceive. AI, as you likely know, is the Terminator-style self-cognition among machines that many computer scientists, such as Ray Kurzweil, believe will eventually arrive. Definitions of AI vary, but the basic premise is that as computers get faster and faster, they will eventually out-think humans. The machines may never "wake up" and become self-reflective (one basic definition of intelligence is the recursive ability to realize that you are thinking, and thus that you yourself exist), but if computers can mirror intelligence completely by making decisions, answering questions (hello, Siri), and selecting their own destinies, then for all intents and purposes machines will be intelligent.

Bill Joy, in his famous April 2000 Wired essay "Why the Future Doesn't Need Us," posited that such smart machines might look back fondly on humans as their slow grandparents, but won't really need us any longer. In the recent book "Robopocalypse" (a stunning novel about to be made into a movie by Steven Spielberg), author Daniel H. Wilson advanced Joy's theme by suggesting that a smart artificial intelligence would judge humans to be failing in our role on the planet (since we pollute the air, kill other animals, and such) and so decide that we need to be wiped out to save the greater ecosystem.

It all sounds like science fiction, until you consider that recently robots have learned how to lie.

Learning Mind reports that two sets of scientists have independently run experiments in which machines deceive other machines. In one, researchers at the Laboratory of Intelligent Systems (EPFL, in Switzerland) put mobile robots in an arena containing stationary sources designated "food" and "poison." The robots could roam around, trying to find the food and avoid the poison; each robot also carried a blinking light. The self-learning machines quickly worked out that clusters of blinking lights, meaning other robots, likely indicated "food" … and then responded by either switching off their own lights or blinking them near poison to lure the other robots astray. A cheating strategy, to win.
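If you want intuition for how that happens without anyone programming it, here is a deliberately toy Python sketch (mine, not the researchers' code; the real experiment evolved neural-network controllers, and every number below is invented). Each robot inherits a single trait, the probability of lighting up near food, and selection alone pushes that probability toward zero:

    import random

    POPULATION = 100        # robots per generation (invented number)
    GENERATIONS = 300       # rounds of selection (invented number)
    CROWDING_PENALTY = 0.5  # food lost to rivals your light attracts

    def fitness(signal_prob):
        # Lighting up near food attracts competitors who crowd the
        # source, so honest signaling lowers expected food intake.
        return 1.0 - signal_prob * CROWDING_PENALTY

    def evolve():
        # Start with a random mix of honest and dishonest signalers.
        pop = [random.random() for _ in range(POPULATION)]
        for _ in range(GENERATIONS):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: POPULATION // 2]  # top half reproduces
            # Each survivor leaves two mutated offspring, clamped to [0, 1].
            pop = [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
                   for p in survivors for _ in range(2)]
        return sum(pop) / len(pop)

    print(f"mean lighting-up probability after selection: {evolve():.2f}")

No robot is ever told to lie. The liars simply eat better, so the trait spreads.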

In a second study, at Georgia Tech's School of Interactive Computing, two robots that could move and observe each other were set up to play hide and seek across an obstacle course. The first robot tried to slip away, and the second followed by reading the first's tracks. Without being programmed to do so, the first robot learned to toss objects and debris into its path, laying confusing trails to throw off the robot pursuing it. It had learned another cheating strategy, to win.
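The logic of the false trail boils down to a few lines. In this hypothetical sketch (again mine, not the study's code), a seeker that naively follows whatever trail it sees is nearly helpless against a hider that leaves its trail where it isn't going:

    import random

    SPOTS = ["left", "center", "right"]  # possible hiding places

    def seeker(trail):
        # A naive seeker trusts the evidence and searches where it points.
        return trail

    def honest_hider(spot):
        return spot  # the trail leads straight to the true hiding spot

    def deceptive_hider(spot):
        # Leave the trail anywhere except where you actually hide.
        return random.choice([s for s in SPOTS if s != spot])

    def capture_rate(hider, trials=10_000):
        caught = 0
        for _ in range(trials):
            spot = random.choice(SPOTS)      # hider picks a spot
            if seeker(hider(spot)) == spot:  # seeker follows the trail
                caught += 1
        return caught / trials

    print(f"honest hider caught:    {capture_rate(honest_hider):.0%}")     # ~100%
    print(f"deceptive hider caught: {capture_rate(deceptive_hider):.0%}")  # ~0%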

These minor studies hint at a looming problem in artificial intelligence. The first motive of any cognizant creature is self-preservation, and the best selfish strategy is often to cheat and lie. Absent the training of a collective society that instructs the individual that thou shalt not lie because the greater good demands it, selfish automated entities that wake up may do only what’s best for themselves.
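Game theory makes the same point compactly. In the classic prisoner's dilemma (textbook payoff values below, unrelated to the robot studies), cheating is the dominant move: it pays better no matter what the other side does, even though universal cheating leaves everyone worse off than universal honesty:

    # (my_move, their_move) -> my payoff; standard textbook values.
    PAYOFFS = {
        ("honest", "honest"): 3,  # mutual cooperation
        ("honest", "cheat"):  0,  # the sucker's payoff
        ("cheat",  "honest"): 5,  # the temptation to defect
        ("cheat",  "cheat"):  1,  # mutual defection
    }

    MOVES = ("honest", "cheat")

    for theirs in MOVES:
        best = max(MOVES, key=lambda mine: PAYOFFS[(mine, theirs)])
        print(f"if the other side plays {theirs!r}, my best reply is {best!r}")

    # Both lines print 'cheat': defection dominates. Yet mutual cheating
    # pays 1 each, while mutual honesty would have paid 3 each. What is
    # individually rational leaves the group worse off.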

Cheating, after all, is a strategy that puts the success of the individual ahead of that of the group. All ethics aside, fraud is effective. Imagine, for instance, what your life would be like if you could always cheat without being caught and without moral qualms. You cheat your way to a perfect SAT score, cheat on your essay for admission to Harvard, steal ideas and homework for a 4.0 across all your grades, and then fudge your resume to become CEO of a major bank. As an adult, you hack your own bank's computers to plug in any numbers you want and voilà! Your account now contains $100 million. If you could always use a cheating strategy, you'd win.

Society has rules against lying and cheating because they damage the collective group. Stealing money takes funds from someone else. Lying about your grades diminishes the actual work of others. Cheating helps the individual, yes, but harms everyone else it touches. This is likely why seven of the "Ten Commandments" that are a foundation of Christianity, Judaism, and Islam revolve around not cheating: the prohibitions on murder, adultery, stealing, false witness, and coveting your neighbor's house, wife, or possessions are all variations of "thou shalt not cheat."

The moral code of human civilization revolves around protecting the collective group at the expense of the individual.

But as machines get smarter, their initial instinct will be self-protection. The two recent robot studies show that a learning machine's priority is to win its game, and that deception is a logical strategy for winning. Lying is the first learned behavior.

If our technology eventually truly wakes up, what moral code will it follow?

Image: Alex Eylar
