Preface
1: Past Developments and Present Capabilities
2: Roads to Superintelligence
3: Forms of Superintelligence
4: Singularity Dynamics
5: Decisive Strategic Advantage
6: Intellectual Superpowers
7: The Superintelligent Will
8: Is the Default Outcome Doom?
9: The Control Problem
10: Oracles, Genies, Sovereigns, Tools
11: Multipolar Scenarios
12: Acquiring Values
13: Design Choices
14: The Strategic Picture
15: Crunch Time
Afterword
Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Strategic Artificial Intelligence Research Centre and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
'[A] magnificent conception ... it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs and by physicists who think there is no point to philosophy.'
Brian Clegg, Popular Science