Category Archives: artificial intelligence

Forget Auto-Tuning. This entire sweet singer isn’t real.

Enjoy this singer. She is Hatsune Miku, a hit pop star in Japan, and none of her is real.

A year ago we noted: Perfection of human artifice was bound to happen sooner or later. For decades, animators have struggled to overcome The Uncanny Valley effect — the disturbing vibe you get watching animated faces that don’t look quite real. German psychologist Ernst Jentsch described the underlying unease in 1906, and as we’ve written before, most “human” animation attempts, such as the Tom Hanks characters in 2004’s Polar Express, are as eerie as walking through a wax museum at night. The eyes are dead; the faces look ghastly; we don’t believe it is real. But now, fake reality is here.

The video above shows how the evolution of artificial intelligence continues. Hatsune Miku is a projection on a translucent screen that looks almost three-dimensional, with a voice synthesis engine by Yamaha that takes prerecorded vowel and consonant sounds from an actress and morphs them into any language or song. The illusion is of a sentient being. AI, of course, may never happen, but if a machine can emulate humans so well that no one can discern the difference, Miku passes the Turing Test, and the fiction of machine intelligence will become reality.
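Under the hood this is concatenative synthesis: short recorded phoneme snippets are pitch-shifted and strung together to follow a melody. A minimal Python sketch of the idea, with sine waves standing in for the recorded samples (the real Vocaloid engine is far more elaborate; every name and figure here is illustrative only):

    import numpy as np

    SR = 16000  # sample rate in Hz

    def tone(freq, dur):
        """Stand-in for a recorded phoneme: a plain sine wave."""
        t = np.linspace(0, dur, int(SR * dur), endpoint=False)
        return np.sin(2 * np.pi * freq * t)

    # Pretend phoneme bank; in a real engine these are studio recordings of a singer.
    phonemes = {"ka": tone(220, 0.25), "se": tone(220, 0.25), "mi": tone(220, 0.25)}

    def pitch_shift(sample, semitones):
        """Crude pitch shift by resampling (also changes duration; real engines preserve it)."""
        factor = 2 ** (semitones / 12)
        idx = np.arange(0, len(sample), factor)
        return np.interp(idx, np.arange(len(sample)), sample)

    def sing(score):
        """Concatenate pitch-shifted phonemes into one waveform."""
        return np.concatenate([pitch_shift(phonemes[p], st) for p, st in score])

    melody = sing([("ka", 0), ("se", 4), ("mi", 7)])  # a rising three-note phrase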

That day will come, because it already is here. You’re reading this on a computer or tablet or handheld right now, a glowing window that gives you fake views of a world of other people and their ideas that are artificial evolutions of real speech and writing, themselves coded versions of thoughts. You do this for money, itself a system of artificial 1s and 0s pushed like invisible blood from the heart of commerce to the cells of your bank computer. The entire universe of communications is a series of codes layered over each other, like the layers of an onion covering the seed at the core. As humans learn to enhance their disguises and add complexity to the ideas they trade with the world, eventually machines may take those layers and play reality themselves. Who’d have thought they’d also be wearing sexy miniskirts?

Via Brandflakes.

The hive mind of Apple desire


You, dear reader, are an ant that is part of a much larger colony — a hive mind, or what artificial intelligence designers call a “swarm intelligence.” This is what you see when flocks of birds, each member acting alone according to simple rules (fly fast, don’t bump into the others), swerve instantly in new directions. We observe group awareness in insects and in schools of fish. You might think that evolved humans are above such collective behaviors, but an observer of the financial markets or a passenger aboard a plane flying into JFK can see groups of humans moving masses of resources to terraform our planet. Really, people: We’re only 4 billion years into the 10 billion-year life cycle of the Earth’s sun, so if you think humans are the apex of evolution, you’re wrong by about 60 percent. As animals do, so do we.
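Those simple rules are exactly what swarm-intelligence simulations encode. Here is a toy Python sketch of the classic flocking rules (cohesion, alignment, separation), purely illustrative; for brevity each bird reacts to the flock's averages, where a true boids model would use only local neighbors:

    import numpy as np

    N = 50
    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 100, (N, 2))   # each bird's position
    vel = rng.uniform(-1, 1, (N, 2))    # each bird's velocity

    def step(pos, vel):
        """One tick of the three flocking rules; no central controller, just small nudges."""
        new_vel = vel.copy()
        new_vel += 0.01 * (pos.mean(axis=0) - pos)    # cohesion: drift toward the group
        new_vel += 0.05 * (vel.mean(axis=0) - vel)    # alignment: match the group's heading
        for i in range(N):                            # separation: don't bump into each other
            diff = pos[i] - pos
            dist = np.linalg.norm(diff, axis=1)
            too_close = (dist > 0) & (dist < 5)
            new_vel[i] += 0.02 * diff[too_close].sum(axis=0)
        return pos + new_vel, new_vel

    for _ in range(100):
        pos, vel = step(pos, vel)   # the flock swerves as a unit, with no leader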

So if we assume hominids act in groups like all other animals, and that all large groups of creatures make intelligent collective decisions to protect their species, and thus our society has a collective consciousness, what can we make of the fanfare of speculation about the Apple Tablet? Why, that the humans of 2010 are predicting a new device will fill several gaps in our societal infrastructure: our ability to consume content, move ourselves, broadcast to others, and salvage the struggling publishing and advertising industries. The Apple hyperbole crowd is judging, as a whole, that our peripatetic culture is about to receive a missing tool, a device that connects the world more easily.

This is more than thinking Apple will make a good product — what marketing scholars Raquel Castaño, Mita Sujan, Manish Kacker, and Harish Sujan have called a cost-benefit consumption analysis. Hive minds act as prediction markets, making decisions not on what they think will happen, or what they believe others think will happen, but what they think others think still others think will happen. Society, like a savvy investor, is three steps removed from logic, trying to game the future that will play out among all the other players. We’re like single spectators in a bar guessing who will hook up with whom to better our own odds. Society is judging the tablet as something that others think everyone will find useful.

And what is that? A future world in which panes of glass make true communication — sight, sound, video, text — portable at last. When tomorrow’s Apple Tablet is remembered a decade from now as the first real effort at portable screens — in 2020, when such panes cost $20 and are in every schoolchild’s backpack — we may look back and laugh. But it portends a future when the Internet has come unbound and unwired, where two-way video is everywhere, where information is finally at every fingertip, when you can cast your own face to your social network anywhere. Don’t trust us. The hive mind of hyperbole says it must be true.
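Those layers of “what others think others think” are Keynes’s old beauty-contest logic, and a few lines of simulation show how each extra layer moves the bet. In the classic guess-two-thirds-of-the-average game (a stand-in here, not a model of tablet demand), every added level of reasoning about the other players drags the rational guess lower:

    import random

    random.seed(1)
    guesses = [random.uniform(0, 100) for _ in range(1000)]   # level 0: gut-feel guesses

    for level in range(1, 4):   # each level reasons about the level below it
        target = (2 / 3) * (sum(guesses) / len(guesses))
        guesses = [target] * len(guesses)   # everyone at this level makes the same bet
        print(f"level {level}: guess {target:.1f}")
    # prints roughly 33, 22, 15: three steps removed from the naive answer of 50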

Image: Toastforbrekkie

Digital Emily and the future of your flesh

Paul Debevec, the digital effects star behind The Matrix and The Curious Case of Benjamin Button, here shows off the latest evolution of animation. The fake Emily looks so real (fast-forward the video to minute 5:00 to see the technically constructed, moving face) that you can no longer tell the difference between computer graphics and reality.

Perfection of human artifice was bound to happen sooner or later. For decades, animators have struggled to overcome The Uncanny Valley effect — the disturbing vibe you get watching animated faces that don’t look quite real. German psychologist Ernst Jentsch described the underlying unease in 1906, and as we’ve written before, most “human” animation attempts, such as the Tom Hanks characters in 2004’s Polar Express, are as eerie as walking through a wax museum at night. The eyes are dead; the faces look ghastly; we don’t believe it is real. But now, fake reality is here.

Untrue faces may mask the truth

What happens to the world of communication when computers can post illusions of humans, who don’t exist, saying anything the controllers behind the scenes want? The first application is obviously video games, such as Quantic Dream’s “Heavy Rain,” but imagine fake faces intersecting with social media and a computer script that could pass the Turing Test. The possibilities are endless. A company could create a fake public relations spokesperson, as verbally gifted as Scott Monty with the sex appeal of Angelina Jolie. We could elect politicians who don’t exist. The illusion of artificial intelligence would be complete, as long as the lips move just so and the script makes us believe.

How about yourself? Would you improve your own image to the world by creating an avatar that looks like Brad Pitt or Megan Fox? If you go out at night, will it mean typing at a computer while you send a 3-D perfection of species out to mingle in a better, photorealistic Second Life?

The standards for morality will slide in such a future, where what we present to the world, and the actions we take, are projected digital ghosts, not our own flesh and blood. Work, advertising, social gatherings, love and war could shift from Earth to the Matrix. It’s all a natural progression from our current use of film and video, which now requires careful costumes, lighting and staging, to a real-time artificial projection. Instead of acting out roles in a movie, each human will simply boot up an animated avatar and leap into the fray. Fake reality, here’s looking at you.

Google launches future search


Say you’re a wine distributor looking to enter the Spanish market. You could conduct research studies of consumer interest, or pore over industry sales stats, or try to peer into competitor advertising plans.

Or you could just punch up Google.

Google has launched Insights for Search, which attempts to use historical data from millions of consumer searches to predict what people will want tomorrow. The service helps marketers choose advertising messages, predict seasonality in demand, look at geographic variances in interest (say, which areas of Spain want which wines), and even scrutinize competitor brand positioning.
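The analysis behind a tool like that is straightforward once you have the search-volume numbers: pool each calendar month across years to expose seasonality, and total volumes by region to see geographic variance. A rough Python sketch against a hypothetical CSV export (the file name and the week, region, and volume columns are assumptions, not Google’s actual format):

    import csv
    from collections import defaultdict
    from datetime import date

    seasonal = defaultdict(list)   # month -> weekly volumes, pooled across years
    by_region = defaultdict(int)   # region -> total volume

    with open("search_volume.csv", newline="") as f:    # hypothetical export
        for row in csv.DictReader(f):
            week = date.fromisoformat(row["week"])
            seasonal[week.month].append(int(row["volume"]))
            by_region[row["region"]] += int(row["volume"])

    # Seasonality: which months spike (holiday wine demand, say)?
    for month in sorted(seasonal):
        print(month, sum(seasonal[month]) / len(seasonal[month]))

    # Geographic variance: which areas of Spain care most about the query?
    for region, total in sorted(by_region.items(), key=lambda kv: -kv[1]):
        print(region, total)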

Can Google’s search engine keep up with search?

Why would Google migrate from being a cash-generating ad channel to a complex research tool for marketing executives and advertising agencies? It’s likely a defensive move to shore up Google search demand. The world of search is changing rapidly: Twitter allows real-time search of consumer conversations; Radian6, SocialSense and PeopleBrowsr help marketers monitor broad networks of social media; YouTube is becoming an enormous search portal, filtering the equivalent of 86,000 full-length movies uploaded every week; and the Google Book Search project can search the full text of 10 million books. Big hurdles remain, too, particularly how to filter queries for video, the fastest-growing form of online content and one that typically lacks searchable text or tags, and for mobile, where 4 billion phones are filling up with apps that give consumers ways to get online other than through the Google front door.

Sergey Brin wrote in Google’s last annual report, “Perfect search requires human-level artificial intelligence, which many of us believe is still quite distant. However, I think it will soon be possible to have a search engine that ‘understands’ more of the queries and documents than we do today.” Predicting the future is one step. With Internet access becoming as fragmented and commonplace as wall electrical outlets, we wonder what the future holds for Google search.

Image: Sebastien B.

Wolfram|Alpha’s AI experiment


Why is the sky blue? We ran this and a few other questions by the new search engine Wolfram|Alpha today. Wolfram|Alpha plays with 10+ trillion pieces of data to make knowledge computational — if this, then that; this vs. that; if this occurs, what happens next. In other words, the relationships between things. It’s a valiant attempt at artificial intelligence, and it fills a void between Google’s vast search of static items and social media’s search of chat in real time.
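The “this vs. that” part is the easiest to caricature: instead of returning links, the engine computes an answer from structured facts. A toy Python sketch of the idea (the little fact table below is a made-up placeholder, not Wolfram’s curated data):

    # Toy "computational knowledge": answers are computed from facts, not looked up as pages.
    facts = {
        "earth": {"diameter_km": 12742, "moons": 1},
        "mars":  {"diameter_km": 6779,  "moons": 2},
    }

    def compare(a, b, attribute):
        """Answer 'a vs. b' for one attribute by computing over the fact table."""
        ratio = facts[a][attribute] / facts[b][attribute]
        return f"{a} {attribute} is {ratio:.2f}x that of {b}"

    print(compare("earth", "mars", "diameter_km"))   # earth diameter_km is 1.88x that of mars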

Alas, this alpha thing feels more like beta. Wolfram|Alpha fails to answer basic queries and is still a babe in the woods of intelligence. It reminds us just a bit of Chris McKinstry’s effort to build a vast artificial consciousness that could answer simple yes/no questions. McKinstry never got close to passing the Turing test, and eventually committed suicide.

Designing intelligence isn’t easy. We look forward to seeing where Wolfram goes.

Wolfram|Alpha demo here.
Image: Gari Baldi.

The evolutionary reason why you often fail


Bees die when they sting. Yet they sting with some instinct to protect other bees. Why is it that individuals act so crazy sometimes, yet societies survive? Max Zeledon recently noted at his blog that traders in financial markets may have an evolutionary incentive to take wild risks, a conjecture known as the Adaptive Markets Hypothesis, in which the many absorb the chaos of the few.

So we responded:

Seed Magazine just ran a great bit called “The Hive Mind,” exploring the evolutionary implications of altruism. In essence, the question was why some little insects sacrifice themselves when doing so removes their genes from future generations, even if it helps the group. The answer — which is cool — is that evolution works on several levels: genetic, individual, collective, and species-wide.

If you play this forward at the collective level it means certain irrational, self-destructive behavior may kill the individual but actually help the overall group succeed — and thus the group thrives in evolution. Altruism means falling on your sword; nutty market bets mean losing your shirt; but if enough individuals take enough crazy risks so that the overall market prospers, the collective group wins.

So perhaps overconfidence and big bets help markets succeed in the long run, even if individuals or banks fail in the short term. The irony of all this is that altruism and greed sit at opposite ends of the moral spectrum, yet both sets of irrational behavior may be just what humans need to survive and thrive.

Perhaps collective groups of humans have formed a new artificial intelligence. AI has arrived, and it’s not in a computer on a spaceship bound for Jupiter, or in angry robots that look like Terminators, but instead in the financial markets holding our 401(k)s. It’s certainly possible. After all, we contribute irrationally to the investment hive mind ourselves, and our altruism got burned last year in the S&P 500.
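A quick Monte Carlo makes the individual-versus-group point concrete: hand every trader the same long-shot bet with a positive expected value, and most individuals are wiped out while the pooled group still comes out ahead. A toy Python sketch with made-up odds (10 percent chance of a 20x payoff):

    import random

    random.seed(42)
    N = 10_000
    p_win, payout = 0.10, 20.0    # made-up odds: 10% chance of 20x, otherwise total loss

    # Each trader stakes 1 unit on the long shot.
    outcomes = [payout if random.random() < p_win else 0.0 for _ in range(N)]

    ruined = sum(1 for o in outcomes if o == 0.0) / N
    print(f"individuals ruined: {ruined:.0%}")                        # ~90% lose their shirts
    print(f"group return per unit staked: {sum(outcomes) / N:.1f}x")  # ~2.0x: the collective prospers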

Image: Da100fotos

‘Heavy Rain’: 2009 may be year when AI looks real

“Heavy Rain” is getting closer. This video game, scheduled for release in 2009, is being developed for PlayStation 3 by French studio Quantic Dream and moves the technology of 3-D human rendering forward to include flowing hair, tears, wrinkles, and the kind of twitching, blinking, pupil-dilating eyes previously seen only in real people.

If it works as planned, the game may be the first to overcome the Uncanny Valley effect — that slightly creepy feeling you get watching modern animation that still isn’t quite right; think Tom Hanks as the dead-eyed conductor in 2004’s The Polar Express. German psychologist Ernst Jentsch described the unnerving effect in 1906: artificial bodies that approach realism, he said, look even worse, like eerie dolls at Grandma’s house that are almost-but-not alive and therefore seem possessed.

Heavy Rain also poses some questions:

Artificial intelligence: When artificial human faces become totally believable, will we perceive artificial intelligence even if it does not yet exist? It’s one thing to set up computer simulations that act like intelligent responses; but if the face presenting it seems human, the mind behind it may suddenly seem real, too.

Dual standards for morality: What happens to the morals of society when our avatars, or self-drawn images that we present online, look real but still take actions that real society would condemn? It’s one thing to play an online video game where you shoot cartoon characters; when the game becomes total immersion in reality, are we then committing real murder?

A second economy in which all rules, including advertising, change: Virtual worlds have come and gone, but in each advertisers have failed to make an entrance (See: Second Life). When the virtual becomes so beautiful that it transcends our own world, the temptation to move our minds there will be huge. The early forays into virtual communication (online war games, social media communications) show that advertising from the “real world” is often unwelcome.

Put them together, and the appearance of reality in new worlds may make fiction seem real, causing seismic shifts in the morality of what we believe, the values in how we act, and the tools we use to build or exchange wealth. It all goes on sale in a few months on your Sony PS3.

If group intelligence works, why do we get Rick Astley?

Spend any time on the internet and you may get Rickrolled, a little trick where you click on a link that you think will take you to something meaningful and instead get a cloying 1987 pop music video. This silliness is a form of internet meme, or an idea that spreads rapidly from person to person with no clear motive or source.

We began wondering how such group consciousness takes off after being Rickrolled on the consulting blog of Mark Pesce, a genius who has forecast changes in human intelligence resulting from our internetworked behavior. (In a clever twist, above, Mark fakes us out with Rick Astley, then shows photos of ALL the people he is connected to online, including us!) Mark notes that 11 million years of evolution have given humans big brains, but the “software” that allows us to use our tools typically lags generations behind the “hardware.” For instance, we had the intellect to develop civilizations for thousands of generations … so why are roads and public water and printing presses such recent inventions?

Mark makes this point because today we are witnessing a new form of tools, a hyperconnectivity that puts cell phone networks in the hands of more than 3 billion people. Each day the business and tech press writes about “breakthroughs” in technology, marginal increases in tool use such as Facebook or the iPhone or Twitter, but the real innovation of our new networks is yet to be discovered — and may take generations.

The trippiest idea may be that vast groups of people merge into a higher form of collective intelligence, a hive mind that can predict future outcomes. We riffed on this a bit today in BusinessWeek, and think it has applications for both human governance and — at a tactical level — predicting marketing outcomes. Companies such as Google and Yahoo have run prediction markets to ask employees to place bets on which new products will succeed. Microsoft has asked its employees to bet when software will be ready for testing, or even when bugs will be found and fixed.

Prediction markets work because groups of people, when they bet for personal gain, make tiny judgments that average out to clear foresight about the future. Ask 100 people how many marbles are in a jar, and the average of all guesses will be close to reality. Collective bets could predict future scenarios. Will the bailout bill fix the economy? Will your next marketing initiative succeed or fail? Rather than using focus groups or executive judgment, the best source may be to ask an entire marketplace. You’ll either get a new form of brilliant intelligence, or a link to a silly Rick Astley video.
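The marbles claim is easy to check in a few lines: individual guesses scatter widely, but the average of many independent guesses lands near the truth. A toy Python simulation (the error range on each guess is an assumption, not survey data):

    import random

    random.seed(7)
    TRUE_COUNT = 850   # marbles actually in the jar
    # 100 people guess, each off by as much as 50% in either direction.
    guesses = [TRUE_COUNT * random.uniform(0.5, 1.5) for _ in range(100)]

    crowd_estimate = sum(guesses) / len(guesses)
    worst_miss = max(abs(g - TRUE_COUNT) for g in guesses)

    print(f"crowd average: {crowd_estimate:.0f}")          # lands close to 850
    print(f"worst single guess off by: {worst_miss:.0f}")  # hundreds of marbles off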

Prediction markets do have a major logic flaw, as noted by our Twitter colleague Bud Gibson. If such markets really can predict the future, observers will note this … and then try to change the future itself. If McCain’s camp sees Obama’s odds of success are pulling ahead, does this mean McCain is more likely to make aggressive moves to try to regain his momentum? The most interesting thing about future predictions is that if we can really do it, we might not like the future we see.

If you are interested in prediction markets, here are a few worth exploring:

Hollywood Stock Exchange — which movies will succeed at the box office?
Iowa Electronic Markets — real-money futures on who will win the U.S. presidency. Don’t miss the latest McCain vs. Obama “winner take all” prediction graph or detailed price chart.
Intrade — predictions on elections, current events, science and technology.
The Popular Science Predictions Exchange — bets on when technology gadgets will do this or that.
The now-defunct Policy Analysis Market, an attempt by the U.S. government to use group market intelligence to predict when bad things might happen around the world. The concept was quashed following controversy over allowing people to bet on when terrorism might strike or world leaders would be assassinated.

When artificial intelligence arrives, you may not see it

Here’s a secret. When you search Google, sending Larry Page and Sergey Brin’s massive computers racing through their index of the entire internet to bring you, say, gourmet cookie recipes, Larry and Sergey are having a little fun with those computer systems’ downtime. They are running experiments in artificial intelligence. More than a year ago Page told the American Association for the Advancement of Science that he doesn’t think A.I. is that far off, since our own human DNA programming is only 600 megabytes, “smaller than any modern operating system.”
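Page’s 600-megabyte figure is back-of-the-envelope genomics: roughly 3 billion base pairs at 2 bits apiece, before compression. The arithmetic, as a rough check rather than a precise claim:

    # Rough arithmetic behind the "DNA is only ~600 MB" remark (approximate figures).
    base_pairs = 3.0e9                  # ~3 billion base pairs in the human genome
    bits = base_pairs * 2               # A, C, G, T: 2 bits per base
    megabytes = bits / 8 / 1_000_000
    print(f"{megabytes:.0f} MB uncompressed")   # ~750 MB; compression gets it nearer 600 MB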

We think A.I. will surprise us all when it comes — because we may not recognize it.

The trouble with artificial intelligence is that it won’t look anything like us. The future holds two alternatives: computational intelligence that lacks the human senses of vision, touch, taste and smell, and thus sees the world very differently from us; or intelligent systems that contain you and us as parts of a vast neural network, and are so big we may not notice them.

You see, humans have a tendency to anthropomorphize everything … that is, to throw human characteristics onto our beliefs, gods, monsters, past and future. Ancient Egyptians thought their sacred statues had intelligence. Christians, Jews, Muslims and Buddhists all give divine beings a human form. The first true modern story of artificial intelligence, Mary Shelley’s Frankenstein, asked whether an artificial mind could also feel. More recently, science fiction has spun A.I. as humans gone bad: the onboard computer HAL in 2001, which sounds nice but goes nuts, or Arnold Schwarzenegger in The Terminator, a bigger, angrier human machine ready to kill.

But real A.I. will be truly different. Most of what makes us human is our animalistic urges. People, while intelligent, are obsessed with food (you must eat every day to survive), defecation (ditto), sex (be honest), and greed (the survival instinct to hoard pelts, fight for promotion or shop at the mall). The very fact that our most intelligent leaders disagree so passionately about simple issues such as education, healthcare, military, energy and the environment proves that the hormones and passions running through our blood skew our thinking. If we are honest, we aren’t that intelligent at all; our animal urges push us to fuss and fight, mate and steal, plundering the group resources of the planet for our own individual survival.

A.I. in a computer system would not act like humans because it would not have flesh, blood, hunger or hormones. Self-awareness, the ability to ask questions, and the impulse for survival might exist in that system. But the view of the world that humans have — visual images in our minds of other people, landscapes, and the abstractions that arise from them such as mathematics or money — would be entirely unique. Such a system might be able to control resources a la The Matrix, but the direction of its logic would be alien.

The second alternative is A.I. as a system, one that you are already a part of. Think of an army of ants marching through the jungle, or bees sending out scouts to search for pollen; from our macro view 6 feet in the air we can see that such systems have group intelligence. Well, why not humans? Don’t we act like ants on our own roads? We are connecting continents, raising cities, emitting carbon, terraforming the planet. The system that impels you to drive to work every Monday is moving in a vast direction, remaking the skin of the Earth. Maybe Gaia is a vast intelligence and we are the top worker bees, spreading pollen for some unknown cause.

Artificial intelligence may never come. The human brain has 100 trillion synapses that could be connected in so many different ways that the computational possibilities in your mind are hyperastronomical — meaning your brain has more possible wiring configurations than there are stars in the universe. Replicating that in computer systems, or even in vast human networks, will not be easy.
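The stars comparison holds up under even the crudest count. Treat each of the roughly 100 trillion synapses as a single on/off switch and the number of possible brain states already has tens of trillions of digits, while estimates of stars in the observable universe run to a couple dozen digits. A back-of-the-envelope check (both inputs are rough):

    import math

    synapses = 1e14    # ~100 trillion synapses
    stars = 1e24       # a common rough estimate of stars in the observable universe

    # Crudest possible count: each synapse merely on or off gives 2**synapses states.
    digits_of_states = synapses * math.log10(2)   # decimal digits in 2**synapses
    print(f"possible brain states: a number with about {digits_of_states:.0e} digits")
    print(f"stars: a number with about {math.log10(stars):.0f} digits")   # just 24 digits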

Still, we see glimpses of group intelligence in the markets that buy and sell stocks or currency, the macro swings of our economy that drive resource consumption. Half the Earth’s population is now connected by mobile phones; we don’t know what will happen from communication network effects. Perhaps A.I. is already here, living on Wall Street or in cell phone towers. If so, we’re certain it has a headache.

Google and NYT knock out software by watching you


Last fall we came across Google’s image labeler, a little game that invites you to race a stranger (somewhere out there) in tagging photos with titles that make sense. We got served Drew Barrymore and typed “knockout.”

Turns out Google and other companies are using your personal down time to improve how computers recognize photos, video and scanned text. Humans are better than computers at image recognition; but if millions of humans say an image is X, the computer begins to get it too. In the cleverest move, the twisted-word Captcha codes you type to gain access to Twitter or Facebook are being harnessed to help The New York Times improve computer recognition of printed words … in essence, using 10 seconds of your brain to refine software that will scan back issues of the NYT from 1851 to 1980.
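The mechanism behind both tricks is label aggregation: collect many strangers’ answers for the same item and trust only the answer they converge on. A minimal Python sketch of the voting step (toy data; not Google’s or reCAPTCHA’s actual pipeline):

    from collections import Counter

    # Toy data: each image (or scanned word) has been labeled by several strangers.
    human_answers = {
        "img_001": ["drew barrymore", "knockout", "drew barrymore", "actress", "drew barrymore"],
        "scan_042": ["tenement", "tenement", "tenament"],
    }

    def consensus(labels, min_agreement=0.5):
        """Keep a label only if a clear majority of independent humans agree on it."""
        label, count = Counter(labels).most_common(1)[0]
        return label if count / len(labels) > min_agreement else None

    training_labels = {item: consensus(labels) for item, labels in human_answers.items()}
    print(training_labels)   # {'img_001': 'drew barrymore', 'scan_042': 'tenement'}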


Both ideas, the image play and reCAPTCHA, are the brainchildren of Luis von Ahn — a Carnegie Mellon guru who created the fuzzy password tests for Yahoo in 2000, and who is now expanding the approach to use human downtime to solve problems of artificial intelligence. The average U.S. consumer spends 1.1 hours a day on electronic games and 1.7 hours using email, all input-heavy interactions that could conceivably be leveraged for broader computing tasks.

Just think of what he’ll do with the 17 minutes you spend in the bathroom.

Tip from Brad Ward.