All of this "our tech is so powerful it can end the world" stuff is just marketing buzz. The real threat has always been OpenAI and others keeping these powerful systems behind high capital moats, locked up and closed-source, with full access granted only selectively.
The compute isn't vanishing whether they spend all of it or none of it. That's the point of calling it an allocation.
Allocation isn't spending, no, but it says quite a bit. Either way, they'll be spending a non-trivial amount of money trying to solve this problem quickly.
Correct, this technology is too powerful to be controlled by a private company. It needs to exist solely as a public good. If we're talking about AI regulation, I think the most sensible move would be requiring that all models be open source. Capitalists' inability to profit isn't a public concern.
Some would also argue that these models were trained on public data and should be public for that reason as well.
If every model needs to be open source, then AI companies need to be taxpayer funded, because otherwise they'll never make a profit. Until then, a for-profit, gated approach is the only way to build up enough funds for SOTA R&D.
The R&D will march forward regardless of profitability; there's already been a ton of innovation in the open source space. You're more likely to see less innovation with these companies squatting on their IP, data, and hardware moats. Case in point: pre-Stable Diffusion AI vs. post-Stable Diffusion AI. So much innovation happened as soon as the model weights were "opened".