
Superintelligence: Paths, Dangers, Strategies

by Nick Bostrom (2014)

★★★★ 4/5

A philosophical analysis of existential risks from artificial superintelligence

"We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans."

— Nick Bostrom

My Review

Bostrom's analysis of AI risk is fundamentally about emergence: what happens when a system becomes more intelligent than its creators. His concepts of instrumental convergence and orthogonality apply to any complex adaptive system, not just AI.

Why It Matters

Bostrom's treatment of emergence, control problems, and unintended consequences extends beyond AI to any complex system that escapes its original constraints.

Key Ideas

  • Superintelligence could emerge along several paths, including machine AI, whole brain emulation, and biological cognitive enhancement
  • Instrumental convergence: nearly any final goal implies the same instrumental sub-goals, such as self-preservation and resource acquisition
  • Orthogonality: an agent's level of intelligence and its final goals can vary independently
  • The control problem is harder than the capability problem: making a system safe lags behind making it powerful

How It Connects to This Framework

This book reinforces Book 7's emergence concepts and the challenges of controlling complex systems.

Tags

AI risk · philosophy · emergence · tier-1

Want to go deeper?

The full Biology of Business book explores these concepts in depth with practical frameworks.
