Nick Bostrom, 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. 328 pp.


An ambitious book about a topic normally reserved for science fiction writers: Is it plausible that artificial intelligence (AI) could one day exceed human capabilities? What would be the consequences? What should we do about it? No doubt many would say that such questions should remain within the realm of science fiction. Not so long ago, similar voices would have said the same about nuclear weapons, satellites, antibiotics, transplant surgery, humans walking on the surface of the moon, the internet (the what?), and smartphones.

Clearly these are serious questions worth asking (a view shared by the likes of Bill Gates, Max Tegmark, Martin Rees, and Elon Musk). Bostrom’s method is to think hard about the likely steps along the way, make cogent arguments about the issues, and reach the most plausible conclusions. The result is a book in which each of these steps forms a (usually) short, easily digested chapter, notwithstanding the doltish comment from Tom Chivers at The Telegraph, who classifies himself thus by asserting the book to be “a damned hard read”. It isn’t, although the final chapters do become longer as more complex material is discussed. However, some of the detail could be skimmed without missing the main message. For the reader in a hurry, this strategy is assisted by the very short overview of the content to come at the start of each chapter. Summary tables of key points within many chapters are also helpful.

Bostrom has provided extensive endnotes and an impressively eclectic bibliography, so much so that nearly every page threatens a potentially long diversion into multiple parallel reading threads. It is no criticism of the preceding text to say that the bibliography is almost the best part of the book.

Chapter 1 is a short, thorough, and very readable survey of the technical history of AI development to date. Chapter 2 similarly charts likely future paths. Subsequent chapters discuss different forms that superintelligent AI might take (speed superintelligence, collective superintelligence, quality superintelligence), then proceed to discuss the timing and speed of a future intelligence explosion. These early chapters gradually build the impression that, although clearly deeply considered, what is presented here are opinions, some better supported than others.

Chapter 6 contemplates likely abilities that a superintelligent AI might possess. Chapter 7 speculates on what goals a superintelligent AI might have. Would the outcome of takeover by a superintelligent AI inevitably be doom for humans (chapter 8)? How could we control it (chapter 9)? If several superintelligent AIs were to be created simultaneously, what would be likely to happen (chapter 11)? And so on.

Chapter 14 asks the reader to think about strategic directions: what long-term science and technology policy decisions should we take, and how do these questions apply to the issue of superintelligent AI? For example, is it feasible to limit or prevent the development of a dangerous technology by withholding research funds? Is it safer for several groups or nations to develop AI collaboratively? (Yes.) Chapter 15 more briefly and bluntly asks “What is to be done?”. At the risk of ruining the suspense, Bostrom’s answer: “Will the best of human nature please stand up”.

If one were to strive to be critical, one could note that most chapters don’t arrive at a clear conclusion. But then, could anyone, on questions such as these? At least Bostrom is honest about it. And he provides a valuable counterpoint to rosier views of an AI-led future, such as Ray Kurzweil’s How to Create a Mind.