"The future is already here — it's just not evenly distributed." — William Gibson

The future was already here in 1950, though Alan Turing couldn't have known how prescient his question would prove. Sitting in his Manchester office, cigarette smoke curling around equations that would reshape civilization, Turing posed his deceptively simple challenge: not whether machines could think, but whether we could tell the difference. The Imitation Game, he called it, and like all the best games, the rules seemed straightforward until you realized the stakes.

Six years later, ten researchers gathered for eight weeks at Dartmouth College, convinced they were about to crack the code of human intelligence itself. They coined the term that would define the next century: artificial intelligence. Their proposal radiated the kind of academic hubris that changes the world: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

In 2012, everything changed. AlexNet's decisive victory in the ImageNet competition proved deep learning's potential, launching the modern AI revolution. Neural networks, dismissed for decades as computational curiosities, suddenly became practical. Three factors had converged: massive datasets, powerful GPUs, and algorithmic innovations that made training deep networks tractable. The result was a revelation: many AI problems could be solved not through careful engineering but through statistical learning from examples.

Then came November 30, 2022, the inflection point historians may well mark as the beginning of the AI age. OpenAI released ChatGPT to the public. Within five days, one million people had signed up. Within two months, it reached an estimated 100 million users, making it the fastest-growing consumer application in history. For the first time, artificial intelligence became directly accessible to anyone with an internet connection.

What followed was a rollercoaster that would have given any theme park designer vertigo. March 2023: GPT-4 stunned the world by passing the bar exam and medical licensing tests. April 2023: Italy's data-protection authority banned ChatGPT over privacy concerns, then reversed course within weeks. May 2023: Geoffrey Hinton, the "Godfather of AI" who pioneered the deep learning revolution, quit Google to warn humanity about the technology he had helped create. "I thought we were 30 to 50 years away from general AI," he said. "Now I think it's 20 years or less." His warnings that AI systems could become more intelligent than humans sent shockwaves through the industry.

The months that followed whipsawed between breakthrough and backlash. AI-generated art won competitions, sparking artist protests. Students used ChatGPT to write essays, forcing education systems to reimagine assessment. Lawyers were sanctioned for submitting AI-generated legal briefs with fictional cases. Stock prices of AI companies soared and crashed on rumors and revelations. Every week brought new capabilities—and new concerns about jobs, truth, and human relevance.

Companies implementing AI reported dramatic improvements: 40% reductions in customer service costs, 50% faster development cycles, 80% automation of routine processing. But they also confronted difficult questions about employment, decision-making authority, and competitive sustainability.

The promise of human-AI collaboration confronted the reality of competitive markets that rewarded cost reduction over decision quality. Organizations preserving human involvement often found themselves outcompeted by those eliminating human oversight entirely.

Today, the most important developments may not be technological but organizational, economic, and political: How do we structure markets to reward beneficial AI deployment? How do we preserve human agency while embracing AI capabilities? How do we ensure that AI systems serve human values rather than optimizing for metrics that ignore what makes life meaningful?

This is the story of what happens when artificial intelligence becomes not just technologically possible but economically irresistible—and how humanity navigates the gap between what AI can do and what AI should do.

In a small consulting office in San Francisco, Felix Canis was about to discover that some choices have already been made for us—by intelligences that learned to choose for themselves.

This is the story of what happens next.
