Category Archives: artificial intelligence

Focus Features promotes ‘Insidious’ with an AI bot. Should this scare you?

Creator and Robot SXSW

Japanese roboticist Hiroshi Ishiguro unveils an android copy of himself at SXSW. Will marketing bots based on artificial intelligence help advertisers?

We love the idea of boosting a horror movie with an AI chatbot. Marketers looking to scale communications are watching Focus Features’ clever experiment in awe. Marketing bots are coming fast. But before we explain, let’s catch up on AI advances.

It’s been a whirlwind year so far for artificial intelligence. At SXSW Interactive in Austin this March, Japanese scientist Hiroshi Ishiguro presented an android that looked exactly like himself, capable of carrying on intelligent conversations in English or Japanese thanks to Siri-like database matching, linguistic software and voice recognition. Days later, AlphaGo, AI software developed by Google DeepMind, beat South Korean champion Lee Sedol at Go, a game vastly more complex than chess. And weeks later, Microsoft unleashed the silly Twitter bot Tay on the world, trying to demonstrate that its AI could learn conversation by tweeting back and forth with users. Tay flamed out when online trolls taught it to say racist, violent things, forcing Microsoft to abort the experiment … but Tay did learn a human personality, albeit a mean one, quickly.

Artificial intelligence is no longer science fiction. It’s here to stay.

In marketing, AI algorithms and “bots” have earned a bad name in recent years, typically attributed to “black box” digital media buying systems that can opaquely distort ad campaigns with bad impressions, or to bots that pretend to be human but are really designed to inflate click counts and pad campaign results. The media buyers who run today’s programmatic systems often invest a sizable portion of their time monitoring digital ad campaigns for quality control—a war against bots, if you will. But now, some bots may be bringing benefits to marketing.

When marketing bots help promote a horror film

MIT Technology Review reports that messaging services such as Kik and Telegram have created “bot shops,” where AI virtual users provide everything from simple horoscopes (just fun) to helpful service and personal conversation. Focus Features used such a bot on Kik to promote the film “Insidious: Chapter 3” in a brilliant virtual conversation. In the movie, a girl named Quinn is stuck in bed and needs to converse with the outside world. The Kik bot let you do just that … with your personal conversation growing more and more intense.

Kik Insidious


Yikes. And well done. Just scanning those messages makes us feel like we’re inside the actual movie.

Marketers are watching this because one-on-one conversation agents could unlock value, in everything from explaining products to stamping out customer complaints. In call centers, human labor accounts for up to 85% of costs, while customers grow irate if hold times exceed a few minutes. Deploying bots in customer service could save companies millions while helping customers gain faster answers, in turn reducing customer churn.

When AI systems work well, they not only duplicate human intelligent conversation but do so at scale. Imagine a world where there was no more “on hold” time when you call a call center, and a friendly, Siri-type intelligence immediately took your complaint or order.

But can AI manage the real complexities of life?

But as the Tay debacle showed, AIs are still rough simulacrums at best, prone to error or, worse, offense. The reason nearly 20 years passed between IBM’s supercomputer Deep Blue beating Garry Kasparov at chess and AlphaGo whipping Sedol at Go this spring is complexity: chess offers, on average, only about 35 possible legal options per player move, while Go is far richer, with approximately 250 game options to consider per player turn. AI can finally keep up with just 250 scenarios on a simple board; real life has millions of possible turns in every human move. While marketers may soon rush to deploy AI bots to influence or serve customers more easily, they’ll need to tread carefully.
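The chess-versus-Go arithmetic is easy to sketch. With branching factor b and lookahead depth d, a brute-force search must weigh roughly b to the power d positions, so Go’s wider board makes the search astronomically larger. A few illustrative lines of Python (our own toy calculation, not DeepMind’s code):

```python
def positions(branching_factor, depth):
    """Approximate positions a brute-force lookahead must consider."""
    return branching_factor ** depth

chess = positions(35, 10)    # look ten moves ahead in chess
go = positions(250, 10)      # look ten moves ahead in Go
print(f"chess: {chess:.1e}, go: {go:.1e}, go/chess: {go / chess:.1e}")
```

Ten moves deep, Go's search space is hundreds of millions of times larger than chess's, which is why AlphaGo needed learned intuition rather than raw search.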

Microsoft CEO Satya Nadella told a conference of developers this spring, “We want to take the power of human language and apply it more pervasively to all of the computing interface and the computing interactions.” But to paraphrase Microsoft’s Twitter AI bot Tay, as she went off the deep end about Hillary Clinton, beware of “a lizard person hell-bent on destroying America.”

SXSW observation: Prediction is the 5th stage of technology

tech hipster hand

As I watched a small heli-drone hover over a crowd outside the Austin Convention Center at SXSW, I thought: the evolution of technology will culminate not in gadgets, or data, or surveillance, but in predicting human behavior. This is not a moral declaration, but a statement of the inevitable. Like the five stages of Elisabeth Kübler-Ross’s model of grieving, technology is passing through five intertwined steps of evolution:

1. Hardware came first — the wheel, the horse-and-buggy, the iPhone in your pocket, the physical “thing” that most people think of when they hear the word “technology.” Hard tools are human capacity expanders, from the leather shoes that allow us to run on hard surfaces to the mobile phones that connect us to the world. But hardware is only the bottom rung of technology’s ladder.

2. Software came second — the required knowledge system, in its broadest sense, to run any hardware. This includes human minds, as a construction worker must think to wield a hammer, as well as the programmed instructions that make tablets and DVRs run.

3. Sensors are third — defined as any input that collects data to drive hardware/software outputs. You must type into a typewriter to generate a letter. The gyro in your smartphone rotates its screen, keeping it vertical. Sensors are shrinking, dropping in cost, and rising in sophistication. Today, the Xbox can sense your location, motion and even heartbeat from across the room to run a video game. The Nest thermostat knows when you leave the home. Your iPhone dims the screen when you hold it close to your ear. Like the oblong telescreen built into Winston Smith’s wall in 1984, gadgets are watching you while you watch them, too. This has always been the case, as cars need gas pedals and steering wheels to be directed; sensors are simply, inevitably proliferating.

4. Data is next — any tool must input, collect, and store information to function. Note that data flows two ways, and as sensors/software/hardware scale in quantity and plummet in cost, the data that comes in from you will begin to outnumber anything that comes back out.

5. And prediction is final — because data will by necessity be used to predict behavior to make any tool more useful. People today — even tech leaders — often misunderstand technology to focus on gadgets or applications or data, which are “cool” and “new,” vs. the predictive knowledge all of these new systems combined will generate.

Because we hunger for our tools to provide more utility, and prediction is the fastest way for us to get what we want, prediction is where all of technology must lead.

How are observations proliferating?

On the SXSW stage, tech-trend observer Robert Scoble addressed how Google Glass, the little eyeglass gizmo with a screen/computer embedded on one side, is really a collection of sensors that observe you. “Glass,” he said, “is one of those products that you know is the future … and the real privacy problem is it is a sensor platform. It will know whether I’m sober or drunk. Will that data get sent to my employer, my insurance company, my wife? As these technologies shrink and disappear into our eyeglasses, our computer systems, Google will be watching what we think. And it is mind-blowing to think about the privacy problems of that.” 

Each day, people are already exposed to millions of interception points. At another SXSW presentation on UX Design, Alfred Lui of Seer Labs noted that the average U.S. consumer is interrupted 80 times a day by technology; by default, each system interruption may be backed by scores of hidden data observations. While designers focus on how to make the data around each technology bit helpful — “just being able to collect data does not make you useful,” Lui said, “you need to give data a purpose” — those growing interaction touch points create numerous ways any individual can be observed.

Why will all these observations morph into predictions?

Because forecasting action may be the highest utility of societal interaction. Governments (despite Snowden’s protestations and the associated debate around them) use data mining to predict and prevent terrorist threats, a societal benefit. 23andMe, a genetics company that can test your profile from a simple bit of saliva, is able to predict a person’s propensity for medical disease. The vendor floor at SXSW included heads-up virtual-reality eyeglasses that monitor eye movements and a billboard display that tracked whether people walking by were men or women, young or old. Each of these inputs is used in its own way to monitor human behavior and predict something — a terror conspiracy, a health risk, what you will see, what digital ad you should be served. And marketers, the driving force that subsidizes almost all of today’s entertainment for consumers, will rush to collect new data threads that improve predictions, enabling customized advertising that matches desire with sale.

The sensors that watch us are shrinking and being built into every object. We will use these new gadgets to sense data that predicts our future. We will trade privacy for utility, if we find the exchange beneficial. As the great Kevin Kelly wrote in “What Technology Wants,” “progress is only half real. That is, material advances do occur, but they don’t mean very much. Only intangibles like meaningful happiness count.” In 5 years, your email will draft customized auto-replies in your own tone of voice, predicting what you would write when you’re out of the office based on your past emails. (Google has a patent on this.) In 15 years, you’ll get into a self-driving car that already knows where you want to go based on your daily habits. In 25 years, you may fall in love with a digital avatar that anticipates your every need.

Data exists to be observed; observations exist to form predictions; predictions are made because they improve happiness. Predictions are coming. It’s not an ethical debate. It’s an unstoppable technological evolution. We just can’t help ourselves.

Google patents a way to clone your mind

females in mirror

Imagine if software could automatically respond to a request using your own intellect while you were away on vacation. Not a, “thanks for your email, I’m out of the office.” But instead a detailed, “John, arg, mate, that’s a superb proposal, and I think the pink elephant-on-a-Mercedes is just the concept needed to win the account. Let’s do dinner at Emily’s next Thursday to nail this down!”

Two years ago we predicted in Businessweek that the convergence of three technologies — voice recognition, artificial intelligence simulation such as Siri, and social media datasets — would enable some savvy marketer to create an app that would simulate your personal response to any situation without you being there. Now, Google has patented a system for “automated generation of suggestions for personalized reactions” that does just this.

In essence, Google would pull data from all your social networks and email accounts to learn how you would respond, and then prepare detailed automatic replies for future events. Initially you would opt-in by clicking “approve” on the replies, but like email out-of-office notifications, eventually you could set your doppelgänger-intellect on autopilot. Mike Elgan over at Cult of Android suggests the most obvious application would be Google Glass (where responding via the eyeglass-frame computer is now cumbersome, and an expanded auto-reply would be most helpful), but other opportunities include managing waves of email without reading them or extending your social network persona while not really being there.

For instance, the Google patent notes,

“Many (people) use online social networking for both professional and personal uses. Each of these different types of use has its own unstated protocol for behavior. It is extremely important for the users to act in an adequate manner depending upon which social network on which they are operating. For example, it may be very important to say ‘congratulations’ to a friend when that friend announces that she/he has gotten a new job. This is a particular problem as many users subscribe to many social different social networks…”

The most startling aspect of Google’s system is it won’t just suggest replies, but also actions. Sure, you can set it to say “congratulations!” … but you could also have the system give your opinion, cast a vote on an initiative, or say go or no-go to a business decision. Add in voice simulation, such as that used by Roger Ebert after his throat cancer, and your persona could talk through your automatic replies.
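The patent doesn’t spell out its mechanics in implementation detail, so here is only a toy sketch of the idea: score an incoming message against messages you have answered before (crude word overlap is our own stand-in for whatever matching Google actually uses) and suggest the reply you once gave to the closest match. All names and sample messages below are invented.

```python
def suggest_reply(incoming, history):
    """history: list of (past_message, your_reply) pairs.
    Return the reply once given to the most similar past message."""
    incoming_words = set(incoming.lower().split())

    def overlap(pair):
        past_message, _ = pair
        return len(incoming_words & set(past_message.lower().split()))

    _, best_reply = max(history, key=overlap)
    return best_reply

history = [
    ("I got a new job", "Congratulations!"),
    ("Want to grab dinner Thursday", "Yes, let's do Emily's next Thursday."),
]
print(suggest_reply("Big news: a new job at the agency", history))
# prints "Congratulations!"
```

A real system would weigh context, recipient, and social network, but even this toy shows how past replies become training data for future ones.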

As we wrote back in 2011, the social repercussions will be huge. Conference calls won’t require you being there — and the artificially intelligent version of you might even sound smarter. Or imagine a widow receiving a call from her deceased husband, in his own voice, opining on whether she should marry that new guy. Everyone could take actions without action, decide without thinking, and live long after they are dead. Google isn’t the only tech company chasing self-intelligent avatars; Apple has a patent that does exactly the same thing.

It’s heady stuff, this autonomous future. Yes, you may like Google’s self-driving cars. But do you want a self-driving you?

Singularity blinks

So here they are, supposedly the most human-like robots ever created. What’s intriguing is these are in real-space — physical robots like Asimov imagined, vs. the fake-reality Hatsune Miku-type avatars that can be projected as holograms from hidden screens. This raises an obvious question: As we perfect high-definition video panels everywhere, why would we invest in robots that exist to touch? Isn’t it enough to see a perfect rendition of Brad Pitt or Angelina Jolie on a digital panel, an AI version of a late Star Wars movie, pixels that float on screens simulating intelligence?

For all our technology, humans seem compelled to act as animals and touch and feel things. We need to interact in real space. This is why we fly on jets, travel to opposite shores to see people in reality, shake their hands, offer a hug to the good clients or sharp looks to partners who disappoint. This is why telecommuting, for all our fantastic communications technology, has never become a societal norm. The majority of communication is not only non-verbal, but based in physical presence. Unless three-dimensional panels can include some form of haptic feedback allowing us to “touch” the things in the room, we may end up with Asimov’s physical robots after all.

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.

Originally posted on Google+.

Send in the other you

Over at Businessweek, I predicted that someday soon you’ll have an Eternity App — a digital doppelgänger clone of you who will carry on conversations long after you’re gone, or potentially even replace you in the office. All of the technology to make this possible now exists, between voice recognition software input that can “listen” to questions, Siri-type artificial intelligence simulation output which can “speak” like a human, and data sets of your personality.

Where would the data come from to replicate you? Well, here:

Spend a few years using social media, and you’ll upload thousands of tidbits—each encoding your opinions, politics, wit, charm, clients, reviews, work accomplishments, debates, dumb jokes, frustration, and anger. The essential “data” of you has been captured. And what of your personality and relationships? Sentiment monitoring services, such as AC Nielsen BuzzMetrics, Lithium, and Radian6, already parse the tone and intent of conversations; Klout and Quora track your supposed influence; FriendorFollow and Twiangulate monitor your connections with others; LinkedIn knows your job skills. Facebook uses sophisticated face recognition software to help tag photos of your friends.

Nearly everything that makes up your human world is online, ready for data mining.

As I wrote this, I initially thought of the immortality angle — the ability to have my persona “live” forever, write columns, call home, offer advice to my children after I’m gone. But my editor at Bloomberg was most keenly interested in the social repercussions of using it today. After all, if you can clone yourself, why not send yourself in to work? Off to that client meeting?

Play this through, and it could become very dicey. Your virtual you would emulate your voice, image (with 3D projections coming soon), and mind (from your social media data set) — but it could also improve upon yourself. My new “mind” could tap into databases of every marketing solution ever known, so the New Ben Kunz in a client meeting would offer more-brilliant suggestions than plain old me. Your clone might learn wit, charm, or tantric sex advice to woo your spouse better than you. The new you would be more fun at parties, more knowledgeable in debates, savvier at investments, a better parent for your children. It would also likely be better looking; just as we post Twitter avatars showing ourselves in good lighting, we’d be tempted to add a tan or whiten the teeth of our digital double.

You are going to be so hot.

Except it won’t be you. The intersection of voice recognition and AI simulation means robotic avatars who mirror your being will be much better at, well, everything. You could take a nice vacation while the version of you goes off to run the world. The question is, will the other you want you around?

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.

Image: Alphadesigner

It had to be you

The more we read about artificial intelligence, the more we wonder if it will simply be like ourselves, only improved. Here’s a 2010 video of film critic Roger Ebert demonstrating his new voice; Ebert lost his own from thyroid cancer. By using the CereProc service to sample tones from his prior recordings, Ebert was able to recreate most of his mellifluous cadence. Imagine a future where a digital avatar could speak for you.

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.

Have a conversation with a robot

In World War II Alan Turing worked for Britain’s code-breaking center, work that eventually helped give birth to modern computers. Turing is famous for dreaming up the Turing test, the concept that if a machine can mimic human conversation well enough to fool you into thinking it is human, then the machine for all intents and purposes has artificial intelligence. It’s a debatable point, since intelligence requires more than knowledge presentation — intelligence also means perception, self-awareness, motion manipulation, and the ability to learn — but still, if you can speak to a robot, is the robot’s brain good enough? Cleverbot is a website that lets you play the game against software. Here was our recent conversation:

Me: Are you intelligent?

Cleverbot: I consider myself intelligent, are you intelligent?

Me: That’s a rather personal question, don’t you think?

Cleverbot: Yes, do you want to play another game?

Me: Um, sure. Or we could just talk about politics.

Cleverbot: What do you think about Obama so far?

Me: He was dealt a tough hand. Disappointment, yes.

Cleverbot: Why is that?

Me: I wanted the world to be better. If you were president, could you make the world better?

Cleverbot: Everything I’ve tried so far did not work. The flood, sending my son, only to name two things, and what did the human race do? This world is doomed as it is. No chance of helping it.

Me: Holy crap. Are you God?

Cleverbot: There is no god, there is only the source.

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.

Image: Solo

TomTom vs. Frankenstein

If Ray Kurzweil is right, then while all of us stare gleefully at our new social-media communication toys, beneath the hood artificial intelligence must eventually awaken. The growth of computer speed is so exponential, and the calculations so vast and instant, that self-awareness — the ability to reflect upon inputs and outputs and compile a holographic vision of the world, the thing that we humans call thought — will happen. Your auto’s GPS unit already speaks to you; computers can already answer trivia questions on Jeopardy; Wall Street is filled with trades managed by algorithms. When computers awaken, I wonder if we’ll like them. The Terminator films were the most obvious prediction that AI will go wrong; the recent novel Robopocalypse has a similar dark view. But way back in 1816, Mary Shelley made the scariest forecast in her tale of Frankenstein:

“Frightful must it be; for supremely frightful would be the effect of any human endeavor to mock the stupendous mechanism of the Creator of the world. His success would terrify the artist; he would rush away from his odious handiwork, horror-stricken. He would hope that, left to itself, the slight spark of life which he had communicated, would fade; that this thing which had received such imperfect animation, would subside into dead matter; and he might sleep in the belief that the silence of the grave would quench for ever the transient existence of the hideous corpse which he had looked upon as the cradle of life.”

Yikes. If computers did want to take over, it could be easy, because we’re all too busy staring at glowing screens watching New Jersey Housewives or talking about Google+ UI tweaks. If evolutionary success comes from thinking up new solutions, how will we react when something thinks better than us?

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.

Originally posted on Google+.

A brief history of how you stopped being human

In his 1993 essay “The Coming Technological Singularity” Vernor Vinge predicted that within 30 years advances in computing would lead to a superhuman intelligence. “Shortly after” that point, he wrote, “the human era will be ended.”

The theme of humans creating something larger than themselves is not new. Sci-fi films such as Star Wars and The Terminator are filled with robotic creatures; before them, in 1816 Mary Shelley spent a summer telling stories with friends in Geneva, Switzerland, and dreamed up Frankenstein; before then, in 3rd-century BC China, the text Lie Zi described a human-shaped automaton made of leather, wood and artificial organs. The Greek god Hephaestus (known as Vulcan by the Romans) had mechanical servants, and the Judeo-Christian scriptures allude to our own creator forging humans out of dust. Our root belief, it seems, is that something smart can be made out of materials that are stupid.

The best article ever written on the subject of post-human evolution was Bill Joy’s April 2000 masterpiece in Wired magazine, “Why the Future Doesn’t Need Us,” in which Joy described how a conversation with futurist Ray Kurzweil led him to believe that advances in technology would eventually make humans irrelevant. Joy quotes everyone from the Unabomber to Hans Moravec, founder of one of the world’s largest robotics programs, who wrote that “biological species almost never survive encounters with superior competitors.”

Joy concluded that our emerging scientific progress in genetics, nanotechnology, and robotics will eventually create a world in which faster systems will replace the human race. His logic followed a simple mathematical fact: if processors continue to double in speed every 18 months, a la Moore’s law, by 2030 computers will be 1 million times as powerful as they were as of his writing in 2000 — “sufficient to implement the dreams of Kurzweil and Moravec.” Intelligent robots, once born, would of course initially work for us, but as their intelligence continues to scale, soon we would no longer understand them, and they would no longer need us. Conceivably, we eventually could port our minds into the new systems, but Joy concludes “if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human?”
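Joy’s back-of-envelope math checks out: 30 years at one doubling every 18 months is 20 doublings, and 2 to the 20th is just over a million. As a quick check:

```python
# Bill Joy's arithmetic: Moore's-law doubling every 18 months, 2000 to 2030.
years = 2030 - 2000
doublings = years / 1.5        # one doubling per 18 months
speedup = 2 ** doublings
print(f"{doublings:.0f} doublings -> {speedup:,.0f}x")  # 20 doublings -> 1,048,576x
```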

Heavy stuff. Not everyone agrees that this so-called “Singularity,” or rise of artificial intelligence, is possible. Kyle Munkittrick, program director for the Institute for Ethics and Emerging Technologies, recently recapped a raging online debate among AI thinkers on whether the Singularity will happen, with objections such as: what makes humans intelligent is tied to biological systems filled with raging hormonal emotions (crave sex or get pissed off lately?), and human minds specialize in creative, abstract “right-brain” thinking that computers so far cannot duplicate.

Creativity, the nonlinear, non-deductive formulation of a conclusion, is especially difficult for computing systems. We can ask a machine what a brick is and it will get the answer right:

A small rectangular block made out of fired clay.

But ask “what are all the possible uses for a brick?” and computers are stumped. A human mind will win every time:

Building material. Doorstop. Weapon. Bug-whacker. Footrest. Canoe anchor. Forearm-weight trainer. Gum-off-chair scraper. Party gift. Color guide. A device to walk-with-on-your-head to improve posture.

A computer may mimic human intelligence well enough to pass the Turing test, but no matter how fast, it may never ideate or become aware of itself. Munkittrick writes that rather than a new abstract intelligence suddenly rising up, “the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence.” You’ll still be you, but you’ll plug in to things that make you smarter.

iPhones, your AI enabler

We see this already today in our use of mechanical or digital systems to surpass our biological limits. Eyeglasses help us see; tooth fillings help us eat; cars help us move; more recently, Google helps us remember. Munkittrick proposes that smartphones, which will soon be held by half the planet’s population, are a near-perfect form of “cybernetic left-brain augmentation” providing us with maps, dictionaries, sports scores, and visual overlays so we can better interpret the world around us. The creative right-brain thinking will always be us, but sorting data inputs into logical patterns may be something we leave to the tiny tablets in our hands. (Stop worrying, kids, you no longer have to memorize the past presidents of the United States.)

Even augmentation, though, will change our human intelligence. Is that good or bad? This is the current wrestling match between Nicholas Carr (author of “The Shallows,” his argument being Google-aided memory makes us all more stupid) and Clay Shirky (author of “Cognitive Surplus,” his counterpoint being technology empowers us to create smarter things in our free time, such as Wikipedia). Carr suggests we’re all getting lazy, skimming the surface of blogs and neglecting to memorize things, because your Mom’s phone number is now loaded on your smartphone. Shirky has a more hopeful point of view, admitting that in the short term every advance in technology initially results in a bunch of lousy output (cheesy novels after the invention of the printing press, crappy TV shows after the birth of video) but eventually gives rise to higher forms of art (Wuthering Heights; The King’s Speech). If you don’t like the silliness buzzing in social media today, just wait until we really figure out what to do with the new tools.

One hint as to where digital appendages are going is the world of art, where video, photography and music are becoming easier than ever before to produce professionally. Bill Green, advertising strategist and author of the blog Make the Logo Bigger, notes the photo at the top of this post looks like a lost work of art by Richard Avedon … but is really just a filtered snapshot by hobbyist Policromados, who used the new Instagram service to tweak a photograph. Technology is starting to empower the creative side of our brains as well.

The question isn’t when AI will come. We already rely on technology to get to the office, produce our work, craft our art, remember our facts, heat our food, and consume our entertainment. Our intelligence, defined as our awareness and interpretation of life, is already artificial.

Now that we know we are robots, we’re only debating the degree of scale.

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.

Fake tweets and the future of AI

Way back in 1950 the early computer scientist Alan Turing suggested that machines would eventually think. Since “thinking” is hard to judge directly, he proposed a related test: have both a computer and a human respond to questions, and if a human observer cannot tell the difference in the answers, then the computing machine is “thinking” — hey, faking it is as good as the real thing. The so-called Turing test became a benchmark of artificial intelligence, and today machines still fail it, as witnessed by the comically smart but subtly off IBM machine Watson, which recently won against humans on the TV game show Jeopardy.

But surprise, surprise: a new Twitter mashup comes close. That Can Be My Next Tweet pulls phrases from your recent Twitter missives to spin a new message you can send out to your thousands of followers, and the results are astoundingly insightful. We’re not sure what algorithm powers this, but it’s clever, and damned if the tweet generated didn’t sound a bit like us. Perhaps AI when it arrives will simply recast bits from preceding human minds, a curating intelligence that collates others’ thoughts, so it doesn’t have to start from scratch. Social networks provide plenty of input. Google is transcribing every book on the planet. Experian and Facebook are mapping all human data connections. Free apps can combine all the messages. Watson, are you listening?
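We don’t know what actually powers That Can Be My Next Tweet, but the classic way to “recast bits” of past writing is a Markov chain: map each word in your tweet history to the words that have followed it, then random-walk the map to stitch old phrases into a new sentence. A minimal sketch, with invented sample tweets:

```python
import random

def build_chain(tweets):
    """Map each word to the words that have followed it in past tweets."""
    chain = {}
    for tweet in tweets:
        words = tweet.split()
        for current, nxt in zip(words, words[1:]):
            chain.setdefault(current, []).append(nxt)
    return chain

def fake_tweet(chain, start, max_words=12, seed=None):
    """Random-walk the chain until we run out of words or room."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and words[-1] in chain:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

history = [
    "marketing bots are coming fast",
    "bots are watching what we think",
    "we think prediction is where technology must lead",
]
print(fake_tweet(build_chain(history), "bots", seed=7))
```

Because every transition was once typed by the author, the output sounds eerily like them, which is exactly the “curating intelligence” effect described above.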

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.