How Deep is Blue? The Present Future of AI


Introduction

“We’ll never get a computer to think until we get one to hallucinate,” says David Gelernter, the Yale computer scientist who almost lost his life to the Unabomber in 1993. A computer that can dream and feel and summon up imaginary worlds would, Gelernter contends, “be a fantastic fluke, like making rainbows out of brick and mortar.”

Oh yeah? According to Hans Moravec, author of Robot: Mere Machine to Transcendent Mind (Oxford University Press, 1998) and principal research scientist at the Carnegie Mellon University Robotics Institute in Pittsburgh, robot intelligence will surpass ours by 2050. The robo-sapiens of tomorrow will be “ourselves in more potent form. A mindfire will burn across the universe. The immensities of cyberspace will be teeming with very unhuman disembodied superminds, engaged in affairs of the future that are to human concerns as ours are to bacteria.”

It seems that not one scientist working in the field of artificial intelligence (AI) sees consciousness as a priori or primordial. For these archaeologists of the mind, the human self emerges out of a machine called the brain—to some, a staggeringly unique machine; for others, a more workaday tool that will soon be surpassed.

And no wonder that’s what they think. The frontiers of artificial intelligence keep revealing the wonders of mechanism:

* At Northwestern University in Evanston, Illinois, scientist Sandro Mussa-Ivaldi recently grafted robots the size of hockey pucks onto fish brains—and the brains, whenever stimulated by light, made the robots move toward the light. 

* Researchers at Warwick University in England have created electronic noses more sensitive than our own: They actually smell illness because they’re fitted with sensors that use computer neural networks—simulating neurons—tuned to particular chemicals released by bacteria. 

* Five years ago, scientists at Johns Hopkins University in Baltimore grew rat neurons on a silicon surface brushed with special proteins. 

* At Los Alamos National Laboratory in New Mexico, engineer Mark Tilden used surprisingly simple transistors to invent tiny bot-bugs programmed to seek sunlight. As they do so, the bugs tirelessly clip grass, draw draperies, and wash windows. 

* Another Los Alamos team is experimenting with quantum computers, where a few atoms execute small programs. (Although it’s important to note that a working quantum computer would be so sensitive that a single stray molecule could cause it to crash.) 

* A whole cadre of researchers is homing in on DNA computers. For example, Leonard Adleman of the University of Southern California in Los Angeles constructed such a computer, and it solved in one week a problem that would have taken a standard computer several years.

To many of these researchers, it seems reasonable that computers might someday perform the same feats as human brains. And yet if they could, would our world forever lose its sweetness?

The mere prospect of artificial intelligence forces us to peer through the porthole of our own selves. Are we machines made of flesh?  Where does mechanism end and soul begin—or are they, in fact, the same, and our language use merely inaccurate? Would robots sculpted in our image evolve into dark versions of ourselves and destroy us? Can algorithms on a computer mimic all that matters most—Van Gogh’s Starry Night, Beethoven’s Ninth Symphony, Einstein’s joy in discovery as portrayed in that classic photo where even his wispy wild white hair seems to be grinning? We are the tool-bearers of earth and, for the time being, robots and computers are simply stunning tools.

But we can’t know if they can be more until we decide just what we are.

No matter how lovely, machines cannot in Lewis Carroll fashion gyre and gimble in the slithiest toves of the great mystery. Human intelligence must be more than the result of an evolutionary crescendo that begins with bacteria and pauses many millions of years later as we contemplate AI—ending when the baton is passed on to a silicon superman. But bravery is an admirable human feature—perhaps hard-wired by the machinery of evolution?—and so we face the field head-on to evaluate what the pundits are saying, what scientists are building, and how close we really are to the edge of the precipice.

How Deep is Blue?

The year 1997 shocked the field of artificial intelligence with an event comparable to the moment life moved from water to land. A computer program named Deep Blue beat chess champion Garry Kasparov. The year before Deep Blue trounced him, Kasparov had claimed, “This match is a defense of the whole human race.  [Computers] must not cross into the area of human creativity.”

But Deep Blue did. And just as landing on the moon and mapping the human genome have forever changed the way we see ourselves, so did Kasparov’s defeat. Deep Blue told us something huge about itself and ourselves, about its surprising capacity and the fact that a chess player’s skill—and perhaps other human endeavors—may be less creative than we believe.

The response to Deep Blue was furiously split. Yale computer scientist Drew McDermott snubbed the program in a New York Times essay: “It can win a chess game,” he wrote, “but it can’t recognize, much less pick up, a chess piece. It can’t even carry on a conversation about the game it just won.” It was, he concluded, only a little bit intelligent. Nobody was surprised to hear this opinion from McDermott: He was already famous for a 1976 essay, “Artificial Intelligence Meets Natural Stupidity,” in which he scolded AI researchers for being seduced by their own metaphors. Just because a particular string of code is labeled “hamburger” doesn’t mean the program has the faintest idea how a hamburger tastes. The code might as well be labeled “G0025.”

But that doesn’t let us off the hook. Deep Blue told us about its intelligence and reflected ours back to us. McDermott noted that although we see Deep Blue as purely mechanistic, searching without self-awareness through countless sequences of moves, our brains are just as blind. We, too, conduct our lives blissfully ignorant of our brain’s “billions of neurons carrying out hundreds of tiny operations per second, none of which in isolation demonstrates any intelligence at all.” And if we protest that a computer is programmed by humans, we have to admit that we, in turn, are at least partly programmed by both genes and environment. Who or what, then, is free?
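McDermott's picture of Deep Blue, blindly scoring countless sequences of moves with no self-awareness, can be made concrete with a toy sketch of minimax search, the mechanistic core of chess programs in general. Deep Blue's real search and evaluation were enormously more elaborate; the miniature game tree below is invented for illustration.

```python
# Toy minimax search: the mechanistic core of a chess engine.
# A position is either a number (a leaf, scored by an evaluation
# function) or a list of successor positions. Deep Blue examined
# on the order of 200 million positions per second; the principle
# is the same, only the scale differs.

def minimax(position, maximizing):
    if isinstance(position, (int, float)):   # leaf: just a score
        return position
    scores = [minimax(child, not maximizing) for child in position]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply game tree: our move, then the opponent's best reply.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree, maximizing=True))  # → 3, the best score we can force
```

Nothing in the procedure "knows" it is playing chess; it only compares numbers, which is exactly McDermott's point.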

Deep Blue seemed to inspire and thrill other scientists, like Ray Kurzweil, founder of Kurzweil Technologies in Wellesley Hills, Massachusetts, and Carnegie Mellon’s Hans Moravec. They rejoiced that Moore’s Law—the doubling of computer power roughly every eighteen months—was right on schedule. A few years later, both came out with books predicting computers would surpass humans in just a few more decades. Kurzweil’s AI work has earned him nine honorary doctorates, honors from three U.S. presidents, and most recently the $500,000 Lemelson-MIT Prize for Invention and Innovation. In his book The Age of Spiritual Machines: When Computers Exceed Human Intelligence (Viking Press, 1999), Kurzweil envisioned a future in which human brains will become software patterns easily transferred to quantum-mechanical hardware, operating at speeds millions of times faster than today. Moravec’s view went further: In his book Robot: Mere Machine to Transcendent Mind, Moravec speculated about future super beings called Minds, who by mere thinking could bring whole universes into existence.
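The arithmetic behind such forecasts is plain compound doubling. Moore's Law is usually quoted as a doubling every eighteen months to two years; the sketch below assumes eighteen months, and actual progress has never been perfectly regular:

```python
# Compound doubling: how Moore's Law turns decades into millions.
# Assumes the commonly quoted doubling period of 18 months (1.5 years).

def growth_factor(years, doubling_period=1.5):
    return 2 ** (years / doubling_period)

# Over roughly 30 years from Deep Blue's 1997 victory:
print(f"{growth_factor(30):,.0f}x")  # → 1,048,576x, about a million-fold
```

Twenty doublings in thirty years yield a million-fold increase, which is why Kurzweil and Moravec could place their scenarios only a few decades out.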

A Shocking Comeuppance

One man particularly perturbed by Deep Blue is Douglas Hofstadter. He’s one of AI’s great philosophers, winner of both a Pulitzer Prize and a National Book Award for his best-selling first book, Gödel, Escher, Bach: An Eternal Golden Braid (Basic Books, 1979), and professor of cognitive and computer science at Indiana University in Bloomington. In this book, Hofstadter made two bold predictions: First, that a computer chess program could never beat a human being at chess; and second, that “music is a language of emotions, and until programs have emotions as complex as ours, there is no way a program will write anything beautiful.”

Hofstadter has been proven wrong on both counts. He says he can accept Deep Blue’s skill: “It doesn’t threaten me. It has raw computing power, it explores fifty billion boards, and makes arithmetical calculations about the attractiveness of each board.”

Not so with music. Hofstadter is a devoted amateur pianist and composer with a grand passion for Chopin. And that passion has led him straight to the enemy, a computer program called Experiments in Musical Intelligence (EMI, pronounced Emmy), created by David Cope, composer and professor of music at the University of California at Santa Cruz.

EMI is an extraordinary thief of style. It can analyze a composer’s essence and create new compositions in his style, be it Beethoven, Mozart, Chopin, or Joplin. When Hofstadter sat down to play one of EMI’s Chopin mazurkas, it was “a shocking comeuppance. They sounded eerily Chopin-like to me—and I’m someone who feels sure that music is a soul-to-soul communication. I think works of art tell you something very central and deep about their creator. How could emotional music be coming out of a program that has never heard a note, never lived a moment of life?”
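Cope's real "recombinancy" technique operates on whole musical signatures and phrase structure, but its kernel, learning which events follow which in a corpus and then chaining plausible continuations into a new piece, can be sketched with a toy Markov chain. The note sequences below are invented for illustration:

```python
import random
from collections import defaultdict

# Toy style-mimicry: learn note-to-note transitions from a "corpus",
# then generate a new sequence that locally resembles the original.
# EMI's actual recombinancy is far more sophisticated; this is only
# the skeleton of the idea.

def learn_transitions(corpus):
    table = defaultdict(list)
    for melody in corpus:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)   # record every observed successor
    return table

def compose(table, start, length, seed=0):
    random.seed(seed)            # deterministic for the example
    melody = [start]
    while len(melody) < length and table[melody[-1]]:
        melody.append(random.choice(table[melody[-1]]))
    return melody

corpus = [["C", "E", "G", "E", "C"], ["C", "E", "A", "G", "E"]]
table = learn_transitions(corpus)
print(" ".join(compose(table, "C", 8)))
```

Every note in the output follows its predecessor somewhere in the corpus, so the result sounds locally "in style" even though the program has never heard a note, which is precisely what unsettled Hofstadter.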

Then Hofstadter brought EMI to one of the country’s top music schools, the Eastman School of Music at the University of Rochester, New York. There, he watched more than half of the faculty vote for one of EMI’s mazurkas as real Chopin, while a genuine but little-known mazurka lost the contest. A graduate student commented to Hofstadter: “I’ve never seen so many theorists and composers shocked out of their smug complacency in one fell swoop, myself included!” Hofstadter says simply: “It’s the most provocative thing I’ve come across in artificial intelligence.”

What does EMI tell us about the rarity, the singularity of art and civilization? “If a computer could create a new J.D. Salinger novel as great as The Catcher in the Rye,” Hofstadter says, “I would be devastated. I would throw in the towel. That would be full human intelligence—and done with no sense of other human beings, no emotions, just putting a novel together by throwing together patterns. With EMI, it’s already been done in music to a level that I would not have believed. We may find out that we’re much simpler than we thought. I don’t like that.”

Hofstadter is also disturbed by the speed with which the field has advanced. Though he doubts Moravec’s and Kurzweil’s claims (some of them, he says, sound like they “emanate straight from Cloud Cuckooland”), he notes that they are both scientists he respects. And their essential claim that we will, in the future, harness enough computing power to emulate a brain is one he’s willing to entertain. “One very powerful argument for not dismissing the Kurzweil-Moravec scenarios,” he says, “is the stark fact of biological life and consciousness. We all tend to feel that life’s emergence from a wholly nonliving world is a truly surprising event.” Who’s to say another surprising event couldn’t occur, that there couldn’t be a transfer of life itself from its current carbon-based state to becoming silicon-based?

And if it did? Hofstadter wants more time. “We all know we’re transitory and eventually something else will replace us. That doesn’t disturb us because it seems so far away.” Both Deep Blue and EMI suggest the monster already may be breathing down our necks. Still, Hofstadter’s hunch is that computers have a long way to go.

“Sheer computing power is probably not enough. There has to be tremendous complexity in the way the brain is organized,” Hofstadter says. “I was cheered up recently when I read an article about how complex it is to simulate a protein folding. IBM is building a supercomputer to do that, and it’s going to take a full year to simulate just one protein folding. That’s comforting.”

God’s Cookbook

“Toasters are not going to take over the world,” quips Jordan Pollack, director of the Dynamical and Evolutionary Machine Organization Lab at Brandeis University in Waltham, Massachusetts. Pollack captured worldwide attention last year with the Genetically Organized Lifelike Electro Mechanics Project, when he and colleague Hod Lipson wrote a computer program that was able to evolve and fabricate its own robot progeny, nicknamed “golems”.

Pollack feels that a humanoid robot is centuries away at best, and that we’re “drastically underestimating the brain” when we talk about building one. “It would take a computer program bigger than the space shuttle to model a single cell. And the ideas we’ve brought to computation are very impoverished compared to what happens in natural systems.  What ingredient is in God’s cookbook that enables new and surprising behaviors everywhere in the universe?”

Awestruck though Pollack may be, he does not believe human smarts are intrinsically special, just writ on a larger, more complex scale than other life: “There’s no élan vital, no need for soul or animation. We just don’t know how to build something as biologically complex as a brain.” (As an aside, Pollack is both an atheist and an observant Jew, who says his rabbi “thinks I’m on a spiritual quest to understand God as the principles of the universe which allow self-organization of life.”)

Thus the golems. The software, its goal to create moving robots, was only allowed to use straight plastic bars, ball-and-socket joints, and electric motors to extend or shrink the length of a bar. The bots were constructed with a 3-D printer, a device common in the car industry. Though the baby bots did little more than crawl blindly around the lab, they had evolved from scratch without human intervention, truly from virtuality to reality. Some bots crawled like crabs, others contracted like snakes, and still others pushed themselves with bars shaped like shovels.

It’s a first step, Pollack says, but only that. “The work we’ve done with golems is really the first of a new kind of intelligent designer,” he says, “and I’m very happy with it. Even so, AI is not fearsome. Its intelligence is very deep and narrow; it’s stovepipe intelligence. It’s not going to transfer to general consciousness.” That, Pollack says, is a software problem that no amount of brute computing power will fix.

Do Bugs Have Brains?

Los Alamos engineer Mark Tilden doesn’t worry about making a brain. Instead, he makes artificial bodies, and ends up with robots that seem demonstrably intelligent. He’s made seven hundred of them during the past decade.

“Look at all the people on the planet right now who are building silicon cortexes,” Tilden scoffs. “Computer scientists and neurophysiologists who have looked at brains so much and thought, ‘Oh this is easy, I should be able to pull one together.’ Well, Mother Nature took a long time to build a brain. Many of my colleagues still believe that human consciousness is an equation you can sell on a T-shirt.”

Tilden builds bugs, not brains. “I don’t really like bugs, but when you start evolving them, you find out something very interesting. Bugs happen. Half of the species alive on earth today are bugs, and that says there’s some kind of universality there. So I build bugs. I don’t worry about the mind.”

Tilden built photo-seeking robot bugs from four simple transistors—fitted with devices to clean windows and floors—and years later they are still doing the job. One bug was particularly baffling: Every time Tilden came home from work the carpet-cleaner was going round in tiny circles. Tilden ran tests and rebuilt it, but each time the behavior resumed.  Finally, a breakthrough: Tilden’s cat kept sitting in front of the carpet-cleaning bot, which detected the cat, stopped, turned ninety degrees, and began to move forward again. The cat walked over to it and sat in front of it again. Soon the robot “believed” it was surrounded by furniture—which it had been programmed to detect and avoid.
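The carpet-cleaner's "belief" is easy to reproduce in simulation: drive forward, and turn ninety degrees whenever something blocks the way. The grid world and blocking schedule below are invented; the blocked steps stand in for the cat:

```python
# A robot that moves forward and turns 90 degrees when blocked,
# as in the cat story. If the obstacle reappears after every move,
# four right turns compose into a tiny square -- no "belief"
# required, just the rule interacting with a persistent cat.

HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W

def run(blocked_steps, n_steps):
    x, y, heading = 0, 0, 0
    path = [(x, y)]
    for step in range(n_steps):
        if step in blocked_steps:
            heading = (heading + 1) % 4     # obstacle ahead: turn right
        else:
            dx, dy = HEADINGS[heading]
            x, y = x + dx, y + dy           # clear: move forward
        path.append((x, y))
    return path

# The cat blocks the robot before every move: it traces a small
# square and ends up exactly where it started.
print(run(blocked_steps={0, 2, 4, 6}, n_steps=8)[-1])  # → (0, 0)
```

The circling Tilden observed falls straight out of a four-transistor-simple rule, which is his whole argument: competence without a brain.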

Tilden then built robot walkers: “You can give one of my robot walkers terrain that is massively complex, and it can figure out by itself how to carry on its mission, which is to walk forward.”

He is now building two bugs that will go to the moon in 2004 on a private craft, scoop up lunar soil, sift it for good-sized pebbles, and deliver it to the landing craft. “The cool thing is my bugs have no computers, which means that gamma rays on the moon can’t tear them out.” He is also building robots so odd-looking that his colleagues call them the “Roswell series”—a thermos-sized body supports a head the size of a VCR machine and legs that are five-and-one-half feet long.

“I started with a few transistors twelve years ago,” Tilden says, “and now my devices use thousands of transistors. A lot of people think language separated us from apes, but I think it’s precision. We are the only animal on this planet with the appropriate precision to use tools and perform acts of real-world competence with any degree of systematic preciseness.” He’s going to try to show just that with his next generation of bots. “I came here without a Ph.D., so I never make a claim until I have a bug to prove it.”

From Bugs to Norns

Why make bugs when you can invent entirely novel creatures or species? British inventor and self-taught programmer Steve Grand has no academic degrees at all, but The (London) Sunday Times named him one of the “eighteen brains behind the twentieth century.” Renowned biologist Richard Dawkins said of Grand’s artificial life forms: “This is the most impressive example of artificial life I have seen.  It has ‘programming genius’ written all over it.”

Grand’s critters are named norns (and since being bought out by a gaming company, they are now a commercial game called “Creatures”). They live in an imaginary world called Albia and are programmed with digital DNA, biochemistry, sexual reproduction, susceptibility to disease, and, most fascinating of all, the ability to evolve in novel ways.

Grand gave them simple instincts and drives, such as the desire for food, sex, and pleasure. Their brains are a software-based neural network—a vast assembly of nodes connected like a 3-D net. Each digital neuron has one or more inputs and the ability to send signals to other neurons. When a digital neuron is not stimulated by input, it switches off and dies. But it can reconnect to a new, neighboring neuron. In turn, if a neuron receives constant signals, it is programmed to strengthen its connections.
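The rules described above, strengthening a connection under constant stimulation and killing or rewiring it when input stops, amount to a Hebbian update plus pruning. A minimal sketch follows; the thresholds and growth rates are invented, and the Creatures network's real node and synapse types are far richer:

```python
# Hebbian-style update for one digital neuron, following the rules
# in the text: constant stimulation strengthens a connection, and
# prolonged silence kills it (a norn's neuron would then rewire to
# a neighbor). All numeric parameters here are invented.

class Neuron:
    def __init__(self, weight=1.0):
        self.weight = weight
        self.silent_steps = 0

    def step(self, stimulated):
        if stimulated:
            self.silent_steps = 0
            self.weight = min(self.weight * 1.1, 10.0)  # strengthen, capped
        else:
            self.silent_steps += 1
            if self.silent_steps >= 5:
                self.weight = 0.0   # connection dies; ready to rewire
        return self.weight

busy = Neuron()
for _ in range(10):
    busy.step(stimulated=True)
print(round(busy.weight, 2))   # → 2.59, strengthened well above 1.0

idle = Neuron()
for _ in range(5):
    idle.step(stimulated=False)
print(idle.weight)             # → 0.0, pruned
```

Use-it-or-lose-it dynamics like these are what let a norn's net reshape itself around the tickles and taps its owner delivers.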

After a norn is born, it begins to explore its imaginary world. To train a norn, the user tickles its stomach with the mouse cursor when it is good (eliciting a giggle), and taps it on the butt when it is naughty (making the norn cry out in pain). Norns seem so alive that they have become virtual obsessions worldwide, most notably at the web site called “Land of the Nornaholics.” Norn lovers swap breeding tips along with norn eggs, tell norn stories, and invent new software to enhance the norns. Norns live only about twelve hours, and people have been known to cry when their norns die.

Grand says he himself was “quite shocked the first time I saw two of them play ball.” He knew he was successful when someone sent him an ill norn by email and asked him to fix it.  “I discovered that it was deaf and blind because a gene hadn’t been expressed. Later I wondered who was the idiot—them for worrying about a data file, or me for spending a day trying to cure it.”

One woman, Nina, said norns helped her get through her grandmother’s death. “It was very painful for me, so each day I played with the norns. They had their problems and lives and helped me to forget about my pain.” From her first two norns she evolved five new “breeds.” Some were green, others ate only grapes, and still others had a strange gait. One norn was ready to breed, and Nina produced “more than fifteen guys for this female, and she walked around receiving kisses from them.” But then she noticed one green norn that had stiffened upon seeing her, and she stiffened too. They mated not once, but three times. “She had many guys around but she chose only one, so it’s norn love, I think.”

Even stranger is “AntiNorn,” whose creator’s web site was “dedicated to those who love to torture their norns.” As soon as he announced his site, AntiNorn’s creator received death threats. Norn lovers downloaded the tortured, starved, and beaten norns, and learned enough programming code to rehabilitate them. One person sent Mr. AntiNorn an email threatening to torture him with chains, knives, poisonous chemicals, and starvation, and concluded: “Then you’ll perhaps know how the little innocent defenseless norns feel when you abuse them.”

It seems as if norns have elicited the full range of human feeling and response. Like Deep Blue, EMI, and the golems, norns are the kind of artificial intelligence that makes us question just what constitutes life—theirs and ours. To what, exactly, are norn-lovers attached?

“I cannot and won’t reason myself out of an emotional attachment by saying they’re only software,” Grand says. “That would go against everything I believe. I am only software, too—an arrangement of interacting chemical compounds. I think we’re special, but we aren’t separate or independent from the rest of the universe. We are machines, like anything else. We are, however, glorious and marvelous machines, the kind of machine that other machines should look up to.”

Maybe so.  But there is still a case to be made for soul as something a priori, or, at the very least, for the miraculous “here-ness” of the universe, even before so-called life began.  After all, life is a mystery that is beyond grasping, for no matter how much we understand about machinery, we have no good scientific story about how matter and energy got here in the first place.

In any case, artificial intelligence reveals a great deal about how we interact with the world at large. At MIT’s Artificial Intelligence Lab in Cambridge, Massachusetts, scientist Cynthia Breazeal has garnered endless media attention for Kismet, a robot that smiles, pouts, recoils in fear, or arches a curious eyebrow in response to input. Kismet has been given big blue eyes and sumptuous red lips. She has been wired with three drives—a need to be around people, to seek out toys, and to rest. Anne Foerst, formerly at MIT’s lab and now professor of computer science and theology at St. Bonaventure University in New York, says: “As soon as Cynthia attached movable bright red lips, everything changed. People became much more involved and stayed with Kismet much longer.” We are programmed to react to big red lips, Foerst says, something that Hollywood and the pornography industry know well.

“It’s not that Kismet is so special. What’s special is how we relate to her. It reveals how much we project ourselves into everything. It is natural and normal to anthropomorphize.”