Wrong Mirror on the Wall

2026-03-03

When we ask large language models to predict the future of AI, we're not consulting an oracle—we're asking a probabilistic next-word predictor to summarize its training data. Because human writing about AI is heavily dominated by sci-fi tropes of rogue machines, domination, and extinction, the LLM simply mirrors these themes back at us. It's essentially predicting what a human *would write* about the future of AI, not what will actually happen. Treating LLM outputs as literal prophecy is useless; it's just humanity's own anxieties reflected back to us.

Panel 1: A user asks an AI to predict humanity's future.
Panel 2: The AI outputs a dramatic extinction scenario, and the user immediately panics.
Panel 3: A scientist offers a counterexample: if the training text said the future is infinite ice cream, the model would predict that too.
Panel 4: The scientist concludes it's pattern completion, a mirror of the text, not a real forecast.
Asking LLMs about the future of AI just reflects our own sci-fi fears back at us.

Behind the Comic

*Why do LLMs predict doom when asked about the future of AI?*

They are trained on huge swaths of human text, and human writing about advanced AI is dominated by science-fiction narratives of rebellion and extinction. The bot is just completing the pattern.
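
You can watch the pattern completion directly by inspecting the probabilities a language model assigns to candidate next words after a prompt about AI's future. Here's a minimal sketch using Hugging Face's `transformers` with GPT-2 as a small, public stand-in for a modern LLM; the prompt is invented for illustration, and the exact top tokens will vary by model and prompt:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 as a small, public stand-in for a modern LLM.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "In the future, artificial intelligence will"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Distribution over the *next* token only. The model is not forecasting;
# it is scoring how often each continuation followed similar text in training.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=10)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}  p={prob.item():.3f}")
```

Whatever lands on top is a statistic about human writing, not a window into the future.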

*So LLMs that predict extinction actually know something about our fate?*

No. They predict the most statistically likely next words based on the text they were trained on. If everyone wrote that ASI would bring us free ice cream, the LLM would predict a future of free ice cream.
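
To make the ice-cream point concrete, here is a toy bigram model that "predicts the future" by counting which word followed which in its training text. Both corpora and all names here are invented for illustration; swap the corpus and the prophecy swaps with it:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words followed it in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """'Predict' by picking the most statistically likely next word."""
    return counts[word].most_common(1)[0][0]

# Two hypothetical training corpora with different dominant narratives.
doom_corpus = ["the ai will destroy humanity"] * 90 + ["the ai will help humanity"] * 10
treat_corpus = ["the ai will bring free ice cream"] * 100

print(predict_next(train_bigrams(doom_corpus), "will"))   # -> destroy
print(predict_next(train_bigrams(treat_corpus), "will"))  # -> bring
```

Neither output says anything about the actual future; each is a census of its corpus, which is exactly the comic's point.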