Will superintelligent machines destroy humanity? The pitfalls of artificial intelligence.

By Ronald Bailey

In Frank Herbert's Dune books, humanity has long banned the creation of "thinking machines." Ten thousand years earlier, their ancestors had destroyed all such computers in a movement called the Butlerian Jihad, because they felt the machines controlled them. The penalty for violating the Orange Catholic Bible's commandment "Thou shalt not make a machine in the likeness of a human mind" is immediate death.

Should humanity sanction the creation of intelligent machines? That's the pressing issue at the heart of Oxford philosopher Nick Bostrom's fascinating new book, Superintelligence: Paths, Dangers, Strategies (Oxford University Press). Bostrom cogently argues that the prospect of superintelligent machines is "the most important and most daunting challenge humanity has ever faced." If we fail to meet this challenge, he concludes, malevolent or indifferent artificial intelligence (A.I.) will likely destroy us all.

Since the invention of the electronic computer in the mid-20th century, theorists have speculated about how to make a machine as intelligent as a human being. In 1950, for example, the computing pioneer Alan Turing suggested creating a machine simulating a child's mind that could be educated to adult-level intelligence. In 1965, the mathematician I.J. Good observed that technology arises from the application of intelligence. When intelligence applies technology to improving intelligence, he argued, the result would be a positive feedback loop--an intelligence explosion--in which self-improving intelligence would bootstrap its way to superintelligence. He concluded that "the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control." How to maintain that control is the issue Bostrom tackles.

About 10 percent of A.I. researchers believe the first machine with human-level intelligence will arrive in the next 10 years. Nearly all think it will be accomplished by century's end. Since the new A.I. will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds. The resulting entity, Bostrom asserts, will be "smart in the sense that an average human being is smart compared with a beetle or a worm." At processing speeds a million-fold faster than those of human brains, Machine Intelligence Research Institute maven Eliezer Yudkowsky notes, an A.I. could do a year's...
