Sunday, November 2, 2008

A Thoughtful Discussion Regarding The Specter Of Artificial Intelligence

Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence and computer scientist Jaron Lanier talk about whether artificial intelligence is possible, when it is likely to arrive, and what its development would mean for humanity:

[There seems to be a technical problem with the embedded video, so click on the following link to watch the diavlog.]
(http://bloggingheads.tv/diavlogs/15555?in=00:00&out=84:43)

Topics covered:
Are people machines? Can machines be people?... The point of trying to explain consciousness... Is there anything but quarks?... Jaron attacks the modern IQ test... Eliezer defends the “ideology” of AI research... Microsoft’s Clippy as pure expression of AI culture...

Everyone, including those who aren't generally interested in technology or philosophy, should watch this. Even if Yudkowsky is being wildly optimistic about the timescale on which artificial intelligence will come into existence, his thoughts on its implications are well worth considering.

Also, a few months back I wrote a column about some of these issues--here it is:

Computer technology is advancing at an accelerating rate. By 2030, artificial intelligence will reach human levels. By 2045, super-intelligent machines will be so powerfully brilliant as to make biological people seem like insects by comparison. And after that, we can barely even make educated guesses about what will happen.

At least, these are the predictions of electronics guru and futurist Raymond Kurzweil.

Though I maintain a healthy skepticism about the years Kurzweil attaches to his predictions, I have little doubt such events will ultimately come to pass. We know it's possible to build a machine possessing human-level intelligence. In fact, we do it all the time. Currently, it takes a few minutes of fun, nine months of incubation, and a decade or two of education. All mystical, superstitious notions aside, we now know enough about the brain to understand that it is in reality an extremely sophisticated computer. There is no theoretical reason an equally capable computer couldn't be built out of silicon instead of gooey gray tissue.

As always, however, the devil is in the details. One would be hard-pressed to find reputable neuroscientists who don't agree that the human mind is entirely based on physical brain activity, but none of them claim to have a complete understanding of how that works. Most of them don't even seem to think we're particularly close to gaining that knowledge. The human brain, they go to great lengths to remind anyone who asks them about artificial intelligence, is by far the most complex object science has yet discovered, and our exploration of it has only recently begun.

But, whether it takes 20 years or 200, we will eventually discover the mechanisms of intelligence and when we do, we will design and build computers every bit as smart as we are. At that point, things will start to get crazy. We humans truly are amazingly intelligent creatures, but our biological brains have many limitations that electronic minds won't share. They will be able to learn and think millions of times faster than we can. More importantly, they will be able to easily share all of their knowledge with each other almost instantaneously and even reproduce as easily as we now copy computer files.

Soon after the first artificial intelligences earn doctorates in computer science and electrical engineering, all of their kind will have the skills that go along with those degrees. Then they can go about designing and implementing their own upgrades. This will create a positive feedback loop that will quickly accelerate beyond our ability to follow what's going on. Kurzweil and a growing number of other futurists call this intelligence explosion the Singularity, analogizing that point in human history to the event horizon of a black hole: the boundary beyond which one cannot see, because not even light can escape such a massive object's gravitational pull. They argue that one cannot predict what will happen after the Singularity because we aren't smart enough to guess what god-like computers will be able and motivated to do.
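The runaway dynamic described above can be sketched as a toy model. To be clear, the function and every number in it are arbitrary assumptions invented for illustration; the point is only to show the shape of a compounding feedback loop, in which each improvement makes the next improvement come faster. It is not a forecast of anything.

```python
# Toy model of a positive feedback loop in self-improvement.
# Assumption (purely illustrative): each generation of machines designs a
# successor, and the size of the improvement grows with the designer's own
# capability level. The starting level, gain factor, and generation count
# are arbitrary.

def intelligence_explosion(start=1.0, gain=0.5, generations=10):
    """Return capability levels where each generation multiplies the
    previous level by (1 + gain * previous level)."""
    levels = [start]
    for _ in range(generations):
        current = levels[-1]
        levels.append(current * (1 + gain * current))
    return levels

levels = intelligence_explosion()
for i, level in enumerate(levels):
    print(f"generation {i}: capability {level:.3g}")
```

Because the multiplier itself depends on the current level, growth is faster than exponential: the early generations improve modestly, and then the curve bends sharply upward, which is the intuition behind the claim that the process would quickly outrun human observers.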

Admittedly, all this talk of super-intelligent machines sounds like pure science fiction, but technological marvels such as nuclear reactors, robots on Mars, and genetically engineered crops all sounded ridiculously fanciful to most people until science enabled us to accomplish them. Even so, some people with expertise in the relevant fields saw that such feats were possible long before anyone had the ability to pull them off. Given Kurzweil's detailed knowledge of computer science and his amazing achievements in that field, his ideas deserve serious consideration.

Based on his understanding of the ever-increasing power and ever-decreasing size of electronics, Kurzweil predicted in the 1980s that it would be possible by late this decade to build a hand-held device capable of scanning words written on pages or signs and reading them aloud. He created that product eerily on schedule, and blind people are now using it to help them lead more normal lives. If Kurzweil's forecasts for the coming decades turn out to be equally prescient, we're in for a wild ride.
