I hope everyone just gives up on AGI and focuses on making tools that do things humans need. Very, very few tasks need autonomous intelligent agents to perform them correctly or well. AGI is just a nebulous dream that fulfills the god complexes of egotist CEOs and 12-year-old boys. Automation does not need intelligence, just reliability. Data synthesis does not need intelligence, just data and the time/equipment to crunch it.
I completely get where you're coming from on this, and agree in many ways, depending on the situation.
Keep in mind, though, that what we're talking about here is a massive shift in the philosophical underpinnings of our existence. It's quite possibly the difference between being able to send intelligent 'life' to other stars or not (and from what we know so far, we're the universe's sole keepers of intelligent life). It also opens the door to fine-tuning our collective sense of ethics and increasing cooperation on solving long-term problems, inequality included. The stakes couldn't be higher.
Of course, there are many dystopian possibilities as well. But you can see why people get excited about it and can't help themselves. Someone is always going to keep trying.
Sometimes I'm not sure whether intelligence could actually increase our willingness to solve long-term problems. It can show us simpler solutions, but I doubt there are solutions simple enough for people to actually act on.
AGI would be a very useful thing for humans right now to get out of the "growth & engagement" hell the tech industry has become obsessed with over the last decade.
An AI agent that can help wade through the bullshit, defeat the dark patterns and drink the future "verification can" would be very much welcome.