Sam tweeted that they're running out of compute. I think it's reasonable to suspect they serve somewhat quantized models when they're out of capacity. It would be a rational business decision that would minimally disrupt lower-tier ChatGPT users.
Anecdotally, I've noticed what appear to be drops in quality on some days. When the quality drops, it also responds in odd ways when asked what model it is.
Then again, GPT-4.5 says "I'm ChatGPT, based on OpenAI's GPT-4 Turbo model," and o1 Pro Mode can't answer at all, just says "I'm ChatGPT, a large language model trained by OpenAI."
So asking a model what it is shouldn't be considered a reliable indicator of anything.