
Marketers, is it OK to pay twice for the same lead?


Remarketing is all the rage. If you’ve been on the Internet lately, you’ve seen remarketing in action. You visit Amazon.com and look at a watch, don’t buy the watch, then suddenly digital ads from the same watch brand are chasing you around NYTimes.com and inside Facebook. Advertisers love remarketing because, well, once a consumer has expressed interest in a product, he or she is more likely to buy if you ping ’em a second time.

But marketers face a dilemma: how do they justify the economics of remarketing? If you spend $1,000 on advertising to generate a batch of leads (a “lead” in marketingspeak is usually a respondent who fills out identifying information), but then have to re-chase those same leads with remarketing, you are in essence paying twice to push the same respondents to a sale. You may have spent $40 the first time to get someone to fill out a web lead form. Then you remarket to that person, spend another $40, and they respond a second time. Was that second $40 worth it to bring in the same person again?

Elevated lead states

Luckily, we recently read a book on quantum physics (ha) that discussed how particles often have two or more states. So let us suggest that a remarketed lead — a person who comes in a second time after being chased with subsequent advertising — is really a lead in a new “elevated lead” state, like an electron jumping to a higher orbit. These “elevated leads” tend to have a higher conversion rate to sale: they’ve already considered your product once, and now that they are coming back again, they are more likely to buy.

Let’s play this out financially. Assume you are a marketing VP in charge of selling $1 million jet engines (a tough sell), and you are willing to spend $40,000 on advertising media for every sale. Your initial leads (created by advertising that gets executives or flight departments to fill out a form on your jet-engine website) cost $40 each. 1% of those leads become “accounts” in your CRM database, organizations qualified as really ready to buy an engine. And then your sales team, mining those qualified accounts, has a 10% close rate to sale. Your model looks like this:

$40 per lead → 1% conversion to account → $4,000 cost per account → 10% close to sale → $40,000 cost per sale.
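
For readers who want to check the arithmetic, here is a minimal sketch of that funnel math in Python (the figures are the illustrative ones above, not real campaign data):

```python
# Funnel math from the jet-engine example (illustrative numbers only).

def cost_per_sale(cost_per_lead, lead_to_account_rate, account_close_rate):
    """Roll a per-lead ad cost up the funnel to a cost per closed sale."""
    cost_per_account = cost_per_lead / lead_to_account_rate
    return cost_per_account / account_close_rate

# $40 per lead, 1% of leads become qualified accounts, 10% of accounts close.
print(cost_per_sale(40, 0.01, 0.10))  # 40000.0 -- right on the $40,000 budget per sale
```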

So far so good

Groovy. Your leads are coming in on pace to hit your $40,000 budget per big-ticket jet-engine sale.

But … you have a bunch of leads that didn’t go anywhere, so you begin retargeting them. Each inbound remarketed lead costs another $40 … but really, it’s $80, because you already spent $40 the first time. Was that $80 total ad expense per “elevated lead” respondent a good investment?


Our model shows that it is, provided the remarketed “elevated leads” convert at a higher rate. In this model, if the conversion rate for people coming in a second time rises from 1% to 2%, the cost per account remains $4,000 … and the cost per sale still hits the $40,000 target. Which means you should be willing to spend another $40 on each remarketed “elevated lead” — provided those leads convert through the funnel at that higher rate.
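
The same arithmetic, extended to the remarketed scenario, is sketched below; again, these are the hypothetical figures from this post, not real campaign data:

```python
# Elevated-lead funnel: cumulative ad spend per respondent doubles to $80,
# but the lead-to-account conversion rate also doubles, from 1% to 2%.
cumulative_cost_per_lead = 40 + 40   # first-touch ad + remarketing ad
elevated_conversion = 0.02           # lead-to-account rate for remarketed leads
close_rate = 0.10                    # account-to-sale close rate

cost_per_account = cumulative_cost_per_lead / elevated_conversion  # $4,000
print(cost_per_account / close_rate)                               # 40000.0 -- still on target

# If remarketed leads converted no better than the original 1%, the same $80
# per lead would push cost per sale to $80,000 -- double the budget.
print((cumulative_cost_per_lead / 0.01) / close_rate)              # 80000.0
```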

It’s all about elevating the response rate

The punchline of this analysis is yes, you can spend more on remarketing — provided you track the correlated increase in response rates and the cumulative total cost per account (qualified lead) and cost per sale. Any remarketing campaign is in fact paying more, perhaps even double, to bring an existing, stalled lead back into your sales funnel again. But if you can justify that with higher response rates and an acceptable cost per sale, remarketing makes sense.

So bring on the remarketing, marketers. Go chase those stalled leads, and turn them into “elevated leads” like electrons charged for transmission. But be sure you evaluate the funnel metrics closely, because remarketing only works if it generates an acceptable cost per sale. 

Your future robot servants will lie to you


What if machines woke up with artificial intelligence, and then decided to cheat? 

Now, researchers toying with AI (artificial intelligence) have created robots that deceive. AI, as you likely know, is the Terminator-style self-cognition among machines that many computer scientists, such as Ray Kurzweil, believe will eventually happen. The concepts of AI are many, but the basic premise is that as computers get faster and faster, they will eventually be able to out-think humans. The machines may never “wake up” to become self-reflective — one basic definition of intelligence is the recursive ability to realize that you are indeed thinking, and thus that you are aware you exist — but if computers can mirror intelligence completely by making decisions, answering questions (hello, Siri), and selecting their own destinies, then for all intents and purposes machines will be intelligent.

Bill Joy, in his famous April 2000 Wired essay “Why the Future Doesn’t Need Us,” posited that such smart machines may look back fondly on humans as their slow grandparents, but they really won’t need us any longer. In the recent book “Robopocalypse” (a stunning novel about to be made into a movie by Steven Spielberg), author Daniel Wilson advanced Joy’s theme by suggesting a smart artificial intelligence would judge humans lacking in our role on the planet (since we pollute air and kill other animals and such) and so decide that we need to be wiped out to save the greater ecosystem.

It all sounds like science fiction, until you consider that recently robots have learned how to lie.

Learning Mind reports that two sets of scientists have independently run experiments in which machines deceive other machines. In one, researchers at the Laboratory of Intelligent Systems put mobile robots in a room with stations designated as “food” or “poison.” The robots could look around and try to work out where to find food and avoid poison; the robots also had blinking lights. The self-learning machines quickly realized that groups of blinking lights, meaning other robots, likely indicated “food” … and then responded by either turning off their own lights or blinking them near poison to distract the other robots. A cheating strategy, to win.

In a second study, at Georgia Tech’s School of Interactive Computing, two robots that could move and observe each other were set up to play hide and seek across an obstacle course. The first robot tried to move away, and the second followed, watching the first’s tracks. Without being programmed to do so, the first robot learned to toss objects and debris into its path to distract the following robot. It had learned another cheating strategy, to win.

These minor studies hint at a looming problem in artificial intelligence. The first motive of any cognizant creature is self-preservation, and the best selfish strategy is often to cheat and lie. Absent the training of a collective society that instructs the individual that thou shalt not lie because the greater good demands it, selfish automated entities that wake up may do only what’s best for themselves.

Cheating, after all, is a strategy that puts the success of the individual ahead of that of the group. All ethics aside, fraud is effective. Imagine, for instance, what your life would be like if you had cheated — without being caught, without moral qualms — to get a perfect score on your SATs. You cheated on your essay for admission to Harvard, stole ideas and homework for a 4.0 across all your grades, and then fudged your resume to become CEO of a major bank. When you go to your own bank as an adult, you then cheat with a computer hack to plug in any numbers you want and voilà! — your account now contains $100 million. If you could always use a cheating strategy, you’d win.

Society has rules against lying and cheating because they damage the collective group. Stealing money takes funds from someone else. Lying about your grades diminishes the actual work of others. Cheating helps the individual, yes, but harms everyone else it touches. This is likely why seven of the “Ten Commandments” at the foundation of Christianity, Judaism and Islam revolve around not cheating — the prohibitions on murder, adultery, stealing, bearing false witness, and coveting your neighbor’s house, wife, or stuff are all variations of “thou shalt not cheat.”

The moral code of human civilization revolves around protecting the collective group at the expense of the individual.

But as machines get smarter, their initial instinct will be individual protection. The two recent robot studies show that the first instinct of a learning machine is to win its game, using deception as a logical strategy. Lying is the first learned behavior.

If our technology eventually truly wakes up, what moral code will it follow?

Image: Alex Eylar