When we ask large language models to predict the future of AI, we're not consulting an oracle—we're asking a probabilistic next-word predictor to summarize its training data. Because human writing about AI is heavily dominated by sci-fi tropes of rogue machines, domination, and extinction, the LLM simply mirrors these themes back at us. It's essentially predicting what a human *would write* about the future of AI, not what will actually happen. Treating LLM outputs as literal prophecy is useless; it's just humanity's own anxieties reflected back to us.
Behind the Comic
Why do LLMs predict a dystopian AI takeover?
They are trained on huge swaths of human text, and human writing about advanced AI is dominated by science fiction narratives of rebellion and extinction. The bot is just completing the pattern.
Do LLMs have any genuine insight into the future?
No. They predict the most statistically likely next words based on existing literature. If everyone wrote that ASI would bring us free ice cream, the LLM would predict a future of free ice cream.
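The "free ice cream" point can be made concrete with a toy next-word predictor. This is a minimal sketch, not how a real LLM works: it uses a simple bigram frequency count over a hypothetical corpus (invented here for illustration) in which everyone wrote that superintelligence brings free ice cream. The model dutifully "predicts" exactly that, because prediction here just means echoing the most common continuation in the training text.

```python
from collections import Counter, defaultdict

# Hypothetical corpus: pretend all human writing about ASI said this.
corpus = (
    "superintelligence will bring us free ice cream . "
    "superintelligence will bring us free ice cream . "
    "superintelligence will bring us world peace ."
).split()

# Count bigram frequencies: which word follows which, and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word in the corpus."""
    return follows[word].most_common(1)[0][0]

# Greedily complete a prompt, one most-likely word at a time.
word = "superintelligence"
completion = [word]
for _ in range(6):
    word = predict_next(word)
    completion.append(word)

print(" ".join(completion))
# → superintelligence will bring us free ice cream
```

Real LLMs use vastly richer statistics over far more context than one preceding word, but the principle is the same: the "prediction" is a summary of what the training text says, not a window onto the future.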