Good insight, thanks. I've been struggling with existential dread related to https://ai-2027.com/ - trying to refute the arguments is exhausting; I should probably just ignore it and get on with my life.
Fears about AI misalignment look like a modern, secular Pascal's Wager: a hedge against a low-probability, infinite-cost outcome. The Rationalist version builds a matrix where one cell is -inf and from that derives an imperative to act, regardless of how speculative the premise is.
However, once you admit infinite utilities and epistemic humility about probabilities, you can justify literally anything. The same construction could obligate you to worship simulation overlords, or to develop interstellar missile defense in case of hostile alien AIs. That's not a rational risk model; it's theological reasoning dressed up in a Bayesian costume.
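To see the structure concretely, here's a minimal sketch (the probabilities and payoffs are invented purely for illustration, not taken from anyone's actual model): once a single cell of the matrix carries infinite negative utility, the expected value is swamped by that cell no matter how small you make its probability.

    import math

    # Toy expected-utility calculation for a Pascal's-Wager-style matrix.
    # All probabilities and payoffs here are invented for illustration.

    def expected_utility(outcomes):
        # outcomes: list of (probability, utility) pairs
        return sum(p * u for p, u in outcomes)

    # "Ignore the risk": near-certain modest payoff, tiny chance of -inf.
    ignore = expected_utility([(0.999999, 10.0), (0.000001, -math.inf)])

    # "Act on the risk": a certain, finite cost.
    act = expected_utility([(1.0, -1.0)])

    print(ignore)  # -inf: any nonzero p times -inf swamps every finite term
    print(act)     # -1.0: so "acting" dominates no matter how small p gets

That's the whole trick: the conclusion is fixed by the -inf cell before any evidence about p is consulted.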
This stuff operates on the boundary of plausibility: the intelligent paranoid mind generates scenarios that can't be conclusively ruled out but also can't be grounded in anything falsifiable. That's not where productive engineering lives.
The escape hatch here is the same one we used for Pascal's Wager: you reject the premise. You don't hand the keys of your threat model over to speculative infinities. You work on problems you can see, measure, and influence. You don't respond to every "what if?" with a fire drill. Recognizing that this is a failure mode in human cognition is enough to avoid it.
Yes, there are real risks with AI: bias, misuse, surveillance capitalism, economic displacement. These are all tractable, all human-driven. But the misalignment discourse? It's less a bet against an existential outcome and more a projection of philosophical anxieties onto a technical ___domain. And once you frame it that way, it's clear: this isn't engineering. Folks doing this are playing metaphysics in front of a terminal emulator.
AGI does sound a lot like a "psychological crutch" when it's held up as the answer to wicked problems like climate change. With faith in AGI, nothing else really matters, because we can eventually surrender ourselves to the superior direction and interventions of this god we have created. There's no point worrying about anything else.
Human technology has evolved significantly in the last few thousand years but we are all the same religious fruitcakes we've always been, whether or not we can see it.
The comparison to Pascal's Wager is apt - I came to the same conclusion when I first read about Roko's Basilisk, but the insight somehow slipped from my mind. It's like an epistemic tarpit attractor.
Existential dread is not unwarranted, but you do need to pick the scenarios that are most likely to occur and focus your energies on those, rather than be sidetracked by spurious outcomes that may never come to pass.
There are many false narratives meant to distract; you'll have to discern which ones hold up, and it can be exhausting if you let it. That's the nature of 5th-Generation Warfare (5GW).
I'll tell you right now, it's highly unlikely that AGI will occur, because the properties required for computation make it nigh impossible.
AI doesn't need to reach AGI to be an existential threat, though. All it needs to do is disrupt the economic cycle enough to stall economic activity, and it can do that quite easily.
A stalled economic cycle leads to chaotic socio-economic collapse and a loss of order. Current population levels are only possible because of dependencies that have grown brittle over time and that rely on continued order just to produce enough food. I'm sure you've heard of the Malthusian Trap; you may not have revisited the later work touching on Malthus by Catton in the '90s.
The economic cycle is a circular flow between workers and producers. We exist today because of an equitable distribution of labor that is compensated.
There is also the issue of money printing, which is too long to discuss here. Needless to say, there are two requirements that must be met to continue the cycle sustainably (avoiding chaotic whipsaw and stalling): producers must make a profit in purchasing power, and workers must earn enough, also in purchasing power, to support a wife and three children so that at least one makes it to have children of their own.
Money printing eventually causes money to lose its store-of-value property, and when that happens, its function as a medium of exchange is lost as well. The economy stalls, and you get the same outcome.
There are at least six other avenues by which this same outcome occurs, and they lie in systems we can no longer change; it's runaway. Solutions like UBI fail on other, equally impossible problems; they are more indirect but still end in chaotic whipsaws, distortions (artificial supply constraint), and so on.
AI doesn't need to be AGI. All it needs to do is disrupt and eliminate the majority of entry-level positions in the career-development pipeline.
Within a ten-year cycle, collapse becomes unavoidable as a matter of math. The output of a pipeline depends on its input, and the output is always less than the input for any input above zero. Zero in, zero out.
People age out of their careers and eventually die. With no one left to mentor, that knowledge dies with them.
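A back-of-the-envelope sketch of that pipeline math (a toy cohort model with invented headcounts and rates, not a forecast): cut entry-level intake to zero and the senior ranks drain on a fixed lag, because each stage can only be fed by the stage before it.

    # Toy cohort model of a career pipeline; all numbers are invented.
    # Three stages (junior -> mid -> senior), each roughly a decade long.

    def simulate(years, intake):
        junior, mid, senior = 1000.0, 1000.0, 1000.0
        promote = 0.10  # ~1/10 of a stage graduates to the next per year
        retire = 0.10   # seniors leave the workforce at the same rate
        rows = []
        for t in range(years + 1):
            rows.append((t, round(junior), round(mid), round(senior)))
            j_up, m_up, s_out = junior * promote, mid * promote, senior * retire
            junior += intake(t) - j_up
            mid += j_up - m_up
            senior += m_up - s_out
        return rows

    # Entry-level hiring goes to zero in year 0 and never recovers.
    for t, j, m, s in simulate(30, intake=lambda t: 0):
        if t % 10 == 0:
            print(f"year {t:2d}: junior={j:4d} mid={m:4d} senior={s:4d}")

With intake pinned at zero, the junior pool decays geometrically (0.9^t here) and the mid and senior pools follow on roughly a stage's delay; the later stages cannot recover once the input is gone.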
That's very interesting. It seems to assume AI agents will have the same evolutionary imperative as other species. But without the ecological context, it looks like an unfounded premise. Why would AI play the 4X game, considering the infinity of alternative imperatives it could follow?
You’re having existential dread over pure science fiction written by people who have something to gain from LLM companies becoming profitable?
You have it right. Just ignore the grifters. These companies are burning billions and are realizing they don't have an exit strategy, so they're becoming desperate.