Monthly Archives: July 2011

Google+ and the bridges of Königsberg

Brand strategist Gunther Sonnenfeld brought up a classic mental puzzle this week. Look at the bridges above; now, find a walking path through the city that crosses each bridge only once. This is an actual map of the city of Königsberg (now Kaliningrad in Russia), and the problem vexed mathematicians until Leonhard Euler solved it in 1735, doing this:

The answer was “no,” you can’t cross each bridge only once. Euler showed that if you reduce the problem to a simple graph, with dots or nodes representing the land and lines representing the bridges, you can determine the answer mathematically: every entry onto a landmass is canceled out by an exit (+1, then -1), so any landmass a walker merely passes through must touch an even number of bridges; only the start and end of the walk may touch an odd number. (If you can only cross a bridge once, that +1/-1 entry and exit effectively vaporizes the bridge behind you once you’re done crossing it.) Since all four landmasses in the Königsberg map touch an odd number of bridges, trotting across each bridge only once is impossible.
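Euler’s parity test is easy to sketch in a few lines of Python. The landmass labels below (A and B for the river banks, C for the Kneiphof island, D for the eastern landmass) are our own shorthand for the standard map, and for brevity the sketch assumes the graph is connected rather than checking it:

```python
from collections import Counter

def eulerian_walk_exists(bridges):
    """A walk crossing every bridge exactly once exists iff the
    (connected) graph has zero or two odd-degree landmasses."""
    degree = Counter()
    for a, b in bridges:          # each bridge adds one to both endpoints
        degree[a] += 1
        degree[b] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

# The seven bridges of Königsberg: every landmass ends up with
# odd degree (A: 3, B: 3, C: 5, D: 3), so no walk exists.
koenigsberg = [("A", "C"), ("A", "C"), ("A", "D"),
               ("B", "C"), ("B", "C"), ("B", "D"), ("C", "D")]
print(eulerian_walk_exists(koenigsberg))  # False
```

Remove any one bridge between two odd landmasses (say, one of the A–C pair) and the function returns True, which is exactly Euler’s point: the parity of the bridge counts, not the city’s geography, decides the question.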

Sonnenfeld suggests this logic scenario is like Google+ creating better standards for sharing information (crossing bridges, if you will, represents the links between our human nodes). We agree. At first, we were annoyed by the hoopla over Google+ and the constant posts over there about how great this or that feature is. But now we’ve realized Google has solved a simple problem:

1. People like social media because they can pause, reflect, and think more deeply before sharing. The few seconds it takes to write something lets us compose our thoughts with more cleverness, humor, or wisdom.

2. Yet all this typing and passing of links, photos and videos creates enormous information clutter.

3. So we all need better filters. We want to share more, and yet ironically need to consume less.

Google+ has solved our mental “sharing path” problem by simplifying our bridges and helping us filter out the city-like noise. This new solution may not stick; all communication networks grow, attract spammers and silliness, and eventually drown in a tide of data pollution that pushes users on to the next new thing. We’ve seen this with fax machines (remember fax marketing?), telephones (telemarketing, killed by the Do Not Call list), email (spam vs. spam filters), and Facebook (those annoying FarmVille game updates) … and for now, Google+ remains clean. There’s a clear path ahead. The challenge for Google will be not to add too many new bridges and muck these paths up.

(Sonnenfeld takes the issue deeper; his post is worth a read.)

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.

TomTom vs. Frankenstein

If Ray Kurzweil is right, then while all of us stare gleefully at our new social-media communication toys, artificial intelligence must eventually awaken beneath the hood. The growth of computer speed is so exponential, and the calculations so vast and instant, that self-awareness, the ability to reflect upon the inputs and outputs and compile a holographic vision of the world — the thing that we humans call thought — will happen. Your car’s GPS unit already speaks to you; computers can already answer trivia questions on Jeopardy; Wall Street is filled with trades managed by algorithms. When computers awaken, I wonder if we’ll like them. The Terminator films were the most obvious prediction that AI will go wrong; the recent novel Robopocalypse takes a similarly dark view. But way back in 1816, Mary Shelley made the scariest forecast in her tale of Frankenstein:

“Frightful must it be; for supremely frightful would be the effect of any human endeavor to mock the stupendous mechanism of the Creator of the world. His success would terrify the artist; he would rush away from his odious handiwork, horror-stricken. He would hope that, left to itself, the slight spark of life which he had communicated, would fade; that this thing which had received such imperfect animation, would subside into dead matter; and he might sleep in the belief that the silence of the grave would quench for ever the transient existence of the hideous corpse which he had looked upon as the cradle of life.”
Yikes. If computers did want to take over, it would be easy, because we’re all too busy staring at glowing screens watching New Jersey housewives or debating Google+ UI tweaks. If evolutionary success comes from thinking up new solutions, how will we react when something thinks better than we do?

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.

Originally posted on Google+

Spider-Man renewed and the novelty effect

So if the first Spider-Man film with Tobey Maguire came out only nine years ago, why in the world is Sony redoing the same Spidey 1 plot — this time, with buffer actor Andrew Garfield — set for release in summer 2012? Is our culture completely out of ideas?

Sony is actually making a clever move, rebooting what was a profitable franchise to include more grit, sex, and videogame offshoots that appeal to older demos. The challenge is how to manage the novelty effect, or the tendency of humans to respond more strongly to something that is new.

In psychology, the novelty effect is the heightened response humans have — in terms of stress, anticipation, or pleasure — from something new. Through our Darwinian ancestry we survived based on novelty; men who sought more mates were most likely to pass their genes on; women who invented communication charms were more likely to get those unfaithful men to stick around and help protect the children; clans who ate diverse foods and built new tools were most likely to be healthy and survive storms and wars. Sexual nuance, language, art, cooking, housing, and automotive sheet metal designs all grew out of our need for new things to survive.

The novelty effect is why Google+ seems so amazing, when it is really a slight rehash of Facebook, Twitter and Skype. It’s why your iPhone 4 looked so incredible last year, and why you’ll want to toss it aside when Apple launches a next-gen phone with a bigger touchscreen and no clunky home button. Novelty is why we sit through stupid films such as Transformers or Captain America with little new plot, because there are new explosions to see.

To test this idea, we asked a teenager to review the new Spider-Man trailer above. He said, “Yes, the plot is the same — but check it out! Now, when Spider-Man flies, we’ll watch it from the first-person viewpoint, all in 3-D!”

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.

Facebook should have seen it coming

In the past two weeks Facebook must have been shocked to see Google+ attract 10 million users and apparently stay on track to double that number by the end of July. It’s a stunning achievement, yet the signs of weakness have been there: Facebook’s cumbersome interface, constant stumbles over privacy, and the inability to get users to adopt new Facebook services such as Messages (do you remember you have an email address there?). Facebook, like many businesses, grew fat and happy with its wonderful business model, neglecting the cracks that gave Google an entry point.

Students of Michael Porter know you can fit all of competition into five boxes: you and your competitors in the middle; customers who buy from you and suppliers who help you build what you sell; and the dastardly product substitutes and market entrants. The current slow demise of the publishing industry is modeled above, with the lock-keepers of content being eaten up by substitutes (the Internet and Google search) and entrants (social media, mobile and tablets).

In July, Facebook became an old-school publisher and Google became a surprising market entrant. This is nothing new: Google+ is doing to Facebook what Wikipedia did to Encyclopedia Britannica, the Prius did to the VW Bug, and Napster did to the music industry. When you least expect it, a competitor pops up with a revolutionary idea … but if you think about it, the new idea is always based on flaws in the old business model. Facebook could have avoided the G+ explosion if it had overhauled its design to address privacy concerns and made Facebook Groups as easy to manage as Circles. Instead, it left Groups buried and difficult to use. Why change, if things are going so well? Oh, um…

It’s a cautionary tale. What gaps have you been neglecting in your business that could allow a competitor to take everything away from you?

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.

A brief history of how you stopped being human

In his 1993 essay “The Coming Technological Singularity,” Vernor Vinge predicted that within 30 years advances in computing would lead to a superhuman intelligence. “Shortly after” that point, he wrote, “the human era will be ended.”

The theme of humans creating something larger than themselves is not new. Sci-fi films such as Star Wars and The Terminator are filled with robotic creatures; before them, in 1816 Mary Shelley spent a summer telling stories with friends in Geneva, Switzerland, and dreamed up Frankenstein; before then, in 3rd-century BC China, the text Lie Zi described a human-shaped automaton made of leather, wood and artificial organs. The Greek god Hephaestus (known as Vulcan by the Romans) had mechanical servants, and the Judeo-Christian scriptures allude to our own creator forging humans out of dust. Our root belief, it seems, is that something smart can be made out of materials that are stupid.

The best article ever written on the subject of post-human evolution was Bill Joy’s April 2000 masterpiece in Wired magazine, “Why the Future Doesn’t Need Us,” in which Joy described how a conversation with futurist Ray Kurzweil led him to believe that advances in technology would eventually make humans irrelevant. Joy quotes everyone from the Unabomber to Hans Moravec, founder of one of the world’s largest robotics programs, who wrote that “biological species almost never survive encounters with superior competitors.”

Joy concluded that our emerging scientific progress in genetics, nanotechnology, and robotics will eventually create a world in which faster systems replace the human race. His logic followed a simple mathematical fact: if processors continue to double in speed every 18 months, a la Moore’s law, then by 2030 computers will be 1 million times as powerful as they were as of his writing in 2000 — “sufficient to implement the dreams of Kurzweil and Moravec.” Intelligent robots, once born, would of course initially work for us, but as their intelligence continued to scale, we would soon no longer understand them, and they would no longer need us. Conceivably, we could eventually port our minds into the new systems, but Joy concludes, “if we are downloaded into our technology, what are the chances that we will thereafter be ourselves or even human?”
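Joy’s arithmetic is easy to verify: thirty years of doubling every 18 months is twenty doublings, and 2^20 is just over a million.

```python
years = 2030 - 2000            # from Joy's writing to his forecast date
doublings = years / 1.5        # Moore's law: one doubling every 18 months
factor = 2 ** doublings
print(doublings, factor)       # 20.0 1048576.0 -- about a million-fold
```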

Heavy stuff. Not everyone agrees that this so-called “Singularity,” or rise of artificial intelligence, is possible. Kyle Munkittrick, program director for the Institute for Ethics and Emerging Technologies, recently recapped a raging online debate among AI thinkers over whether the Singularity will happen, with objections such as: what makes humans intelligent is tied to our biological systems, filled with raging hormonal emotions (crave sex or get pissed off lately?), and human minds specialize in creative, abstract “right-brain” thinking that computers so far cannot duplicate.

Creativity, the nonlinear, non-deductive formulation of a conclusion, is especially difficult for computing systems. We can ask a machine what a brick is and it will get the answer right:

A small rectangular block made out of fired clay.

But ask “what are all the possible uses for a brick?” and computers become stumped. A human mind will win every time:

Building material. Doorstop. Weapon. Bug-whacker. Footrest. Canoe anchor. Forearm-weight trainer. Gum-off-chair scraper. Party gift. Color guide. A device to walk-with-on-your-head to improve posture.

A computer may pass the Turing test by mimicking human intelligence, but no matter how fast it gets, it may never ideate or become aware of itself. Munkittrick writes that rather than a new abstract intelligence suddenly rising up, “the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence.” You’ll still be you, but you’ll plug in to things that make you smarter.

iPhones, your AI enabler

We see this already today in our use of mechanical or digital systems to surpass our biological limits. Eyeglasses help us see; tooth fillings help us eat; cars help us move; more recently, Google helps us remember. Munkittrick proposes that smartphones, which will soon be held by half the planet’s population, are a near-perfect form of “cybernetic left-brain augmentation” providing us with maps, dictionaries, sports scores, and visual overlays so we can better interpret the world around us. The creative right-brain thinking will always be us, but sorting data inputs into logical patterns may be something we leave to the tiny tablets in our hands. (Stop worrying, kids, you no longer have to memorize the past presidents of the United States.)

Even augmentation, though, will change our human intelligence. Is that good or bad? This is the current wrestling match between Nicholas Carr (author of “The Shallows,” his argument being Google-aided memory makes us all more stupid) and Clay Shirky (author of “Cognitive Surplus,” his counterpoint being technology empowers us to create smarter things in our free time, such as Wikipedia). Carr suggests we’re all getting lazy, skimming the surface of blogs and neglecting to memorize things, because your Mom’s phone number is now loaded on your smartphone. Shirky has a more hopeful point of view, admitting that in the short term every advance in technology initially results in a bunch of lousy output (cheesy novels after the invention of the printing press, crappy TV shows after the birth of video) but eventually gives rise to higher forms of art (Wuthering Heights; The King’s Speech). If you don’t like the silliness buzzing in social media today, just wait until we really figure out what to do with the new tools.

One hint as to where digital appendages are going is the world of art, where video, photography and music are becoming easier than ever before to produce professionally. Bill Green, advertising strategist and author of the blog Make the Logo Bigger, notes the photo at the top of this post looks like a lost work of art by Richard Avedon … but is really just a filtered snapshot by hobbyist Policromados, who used the new Instagram service to tweak a photograph. Technology is starting to empower the creative side of our brains as well.

The question isn’t when AI will come. We already rely on technology to get to the office, produce our work, craft our art, remember our facts, heat our food, and consume our entertainment. Our intelligence, defined as our awareness and interpretation of life, is already artificial.

Now that we know we are robots, we’re only debating the degree of scale.

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.

The tendrils of art and artificial intelligence

Clay Shirky has noted that every advance in communications technology leads to the unlocking of new human potential, but only after the incumbents beat their breasts that the new tools are destroying civilization and culture. The invention of the printing press and its cheap reproduction costs created a rash of “novels” — the root term means “something new” — works of fiction that we study in college today but that, at their first appearance, seemed tawdry trash destined to ruin the minds of shallow readers. At every turn, new communications technology creates agita. In the 1960s parents fretted that television, “the boob tube,” would warp minds; more recently, the rise of the Internet in the 1990s and social media in the 2000s has raised the spectre of real human relationships becoming bastardized into superficial, fake threads.


The long road to personalization

Our friends Bill Green and Alan Whitley at digital shop BFG sent us a Mashable article declaring demographics are dead. The column’s author, Jamie Beckland, raises excellent points that new forms of personal data are more effective for marketing … but stretches too far.

We jotted this email back.

Conceptually I agree that marketers continue to improve targeting, and that psychographics are better than demographics. But, as with any provocative article, this writer takes the case too far, because the theory usually can’t be implemented, and demographics, while a broad categorization, are still an effective form of targeting. If you are a mom in your 40s, yes, I’ll run a morning-news spot promoting a local hospital, because your demo makes sense, and no amount of psychographic profiling in the world can predict that, yikes, you just found a lump in your breast.

Yes, there is waste in such approaches, but advertising is a game of what you catch, not what you spill.

I spent an early part of my career working with Don Peppers, the father of 1to1 marketing, who wrote a 1993 book titled “The One to One Future” (which spawned the CRM craze of the 1990s; “CRM” eventually became a term for software after marketers had difficulty implementing the idea). Don’s idea was that marketing targeting would eventually get so precise it would become 1to1: personalized relationships, a feedback loop with every customer. Brilliant idea, but very difficult to implement. When I read people like Joseph Jaffe now claim “the 30-second spot is dead,” I laugh a little, because it’s the same vision 20 years later. It is coming, but slowly, and we’re not there yet.

One of Don’s great thoughts was that “1to1 marketing” – or hypertargeting – works best in industries that have:
a. variance in what customers need, or
b. variance in customers’ lifetime value to the business.

This is why personalization has been implemented best by Amazon and Netflix (where Bill Green and I likely have very different “needs” in books and movies), and why differential treatment strategies are implemented by airlines and hotels (where a frequent business flier has 100x the value of a typical vacation traveler). In such industries, investment in customer-data systems and corresponding hyper-targeted media makes sense.

But in other industries with mass appeal, demographic targeting is fine. Insurance is a classic example – State Farm and Geico spend millions on billboards, which is smart, because their products appeal to almost everyone and it is almost impossible to tell when any individual will come into the market after a bad experience with their old insurance company.

Psychographics cannot predict customer modality, which is why Netflix personalization is still problematic. I don’t know what movie I want to watch next week, and I’m me.

As for the quoted claim that a 1% response rate is bad and therefore traditional advertising doesn’t work: that is ridiculous. As I said, advertising is a game of what you catch, not what you spill. If the math works out at a tiny response rate, with an acceptable cost per acquisition, marketers will throw money at the channel every day of the week. There are 3.5 billion women in the world and I married one – was my personal marketing effort for love and sex a bad campaign? No.

Finally, one major error in this type of prediction is that it doesn’t look at how humans actually use media. Internet use is still less than 1 hour a day for most U.S. consumers. The typical U.S. consumer watches 5 hours and 9 minutes of television a day, which works out to exposure to 166 :30-second TV spots each day. People spend hours in their cars, looking at billboards. There are more marketers who want to push a message out than consumers who want to receive them; people still spend huge amounts of time letting mass media wash over them; and personalization just can’t work at that scale (who could possibly respond to even half of those 166 TV offers, even if every one were exactly what you wanted?). It will be decades before media channels figure out how to implement personalization across such broad media touchpoints.
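As a sanity check on that 166-spot figure: 5 hours and 9 minutes of viewing yields about 166 thirty-second slots if you assume roughly 16 minutes of commercials per broadcast hour. The ad-load number below is our assumption, chosen to match the article’s math, not a figure from the column.

```python
viewing_hours = 5 + 9 / 60          # 5 hours 9 minutes of TV per day
ad_minutes_per_hour = 16.1          # assumed commercial load (our estimate)
ad_minutes = viewing_hours * ad_minutes_per_hour
spots = ad_minutes / 0.5            # each :30 spot is half a minute
print(round(spots))                 # 166
```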

Personalization is coming and we’ll continue to improve our tools, but as with any idea, the theory is often better than the execution. Pinpoint targeting is a dream, but broad media hammer strokes still work, too. Our recommendation is to try to combine both tools, but certainly not to disregard either one.

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.

Google, stuck in the middle

We won’t bore you with predictions of whether Google+, the new me-too social network, will make it. Instead, we’ll suggest Google faces a classic mass-marketing, mainstream-media problem. Branding.

More than 20 years ago Harvard Business School professor Michael Porter wrote the landmark book “Competitive Advantage” (heard the term? the book coined it), and in it he suggested that most companies, as they grow, face a classic challenge. They start out as a niche player — a specialist, something that customers can easily pigeonhole in their minds — and then decide to grow. A consulting boutique decides to take on McKinsey; an online bookstore decides to sell all forms of goods. Some companies, such as Peppers & Rogers Group (1to1 consultants), fail to make the transition; others, like Amazon (once only online books), leap over the confusion to grow to the next level. Porter suggested the challenge in this transition is to not become “stuck in the middle.”

Stuck in the middle is what happens when you lose your focus, and with it your customers’ brand perception of you, and become nothing to anyone as you try to grow to the next level. You aren’t perceived as the next big service, yet your customers forget you used to be special. It’s exceedingly difficult to make the leap from specialist to market leader, not because of your product-design aptitude, but because most consumers don’t give a damn about most products. We all have limited attention spans as customers, and it’s easiest to keep companies, like people, in their place. If you were once known as the maker of Product X, like a girl or guy in high school who gets a bad rap, customers may remember you as That Product X forever.

When a company such as Google, known as the leader in online search, tries to become a social network, human beings get confused. Google+ has hordes of features emulating Facebook, GroupMe, Instagram, and Twitter. So we go hmm. “Where do you fit in this complicated, crowded space?” we ask. We already have a lock on where Facebook and Twitter fit. They are snapshots in our mind on the scale of private-to-public, boring-to-fun. Instagram, a newcomer, has rapidly gained traction by carving out a unique position. LinkedIn, the cold porridge of social media, is no fun to visit, but you know you need to post your resume there. In the world of communications, specialists rule.

Google+, however, faces a basic challenge. We already know who Google is. Google, we love you, we use you daily … and unfortunately, you’re stuck in the middle.

Bonus points: If you’re not familiar with Michael Porter, get started.

Ben Kunz is vice president of strategic planning at Mediassociates, an advertising media planning and buying agency, and co-founder of its digital trading desk eEffective.