I get their point, but at the end of the day it's politics and marketing getting their way.
With a 32GB card well below $1,000, it would sell like hotcakes to anybody doing anything AI-related short of large-scale training (you can easily run inference and fine-tuning on such a card).
But it would massively eat into their data center sales, which is what executives and investors want to see.
It's a tragedy, because such a card would get a lot of love and support from amateurs making it work great in the ML/AI context, which would improve their data center offerings long term.
So this is gonna end up the way AMD launches usually do: it will disappoint or be ignored by most gamers because it has less brand power and no DLSS, and AMD will still disappoint at the data center level.
I think it could work out with a weak GPU (or a high TDP). You want the card to have a higher TCO for data centers, and if you make it a 3-slot card with a 400W TDP that's 2x slower than your server GPUs, I think it works out. Once you have $10k of server (CPU + RAM + networking), if your options are adding 2 9070AIs or 3 MI-300whatevers, the server GPUs would win for a server.
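To make the reasoning above concrete, here's a back-of-the-envelope sketch: to hit the same aggregate throughput, the weaker consumer cards need more $10k server bases, so the per-box overhead can tip the total cost toward the denser server GPUs. Only the $10k base, the 2x-slower ratio, and the 2-vs-3 cards-per-box counts come from the comment; every price and throughput number below is a made-up placeholder.

```python
import math

SERVER_BASE = 10_000   # CPU + RAM + networking per box (from the comment)

def cost_for_throughput(target, gpu_price, gpu_perf, gpus_per_box):
    """Total hardware cost to reach `target` units of aggregate throughput."""
    boxes = math.ceil(target / (gpu_perf * gpus_per_box))
    return boxes * (SERVER_BASE + gpus_per_box * gpu_price)

TARGET = 6.0  # arbitrary aggregate throughput target

# Hypothetical 32GB consumer card: $900, half a server GPU's speed,
# only 2 per box (3 slots, 400W each, per the comment).
consumer = cost_for_throughput(TARGET, gpu_price=900, gpu_perf=0.5, gpus_per_box=2)

# Hypothetical server GPU: $8,000, full speed, 3 per box.
server = cost_for_throughput(TARGET, gpu_price=8_000, gpu_perf=1.0, gpus_per_box=3)

print(f"consumer-card build: ${consumer:,}")  # 6 boxes -> $70,800
print(f"server-GPU build:    ${server:,}")    # 2 boxes -> $68,000
```

With these placeholder prices the server GPUs win, but only narrowly; the whole argument hinges on how much of the budget the fixed per-server cost eats, which is exactly why crippling the consumer card's density (slots, watts) matters.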