If they implemented this properly, they would retroactively filter out any content that is now disallowed by robots.txt or that carries the #NoAI tag.
As with anything you put on the internet, you depend on their goodwill to respect those tags or robots.txt. OpenAI can simply decide to ignore them. It's wishful thinking.
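For context, opting out via robots.txt amounts to a sketch like the one below (GPTBot is the crawler token OpenAI has published; whether any crawler honors it is, as noted, entirely up to the operator):

```
# robots.txt at the site root: ask OpenAI's crawler to stay out entirely
User-agent: GPTBot
Disallow: /

# everyone else unaffected
User-agent: *
Allow: /
```

And that's the whole mechanism: a plain-text request with no enforcement behind it.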
Untraining may be difficult, but will they only ever improve the current model? Won't they eventually want to change its dimensions or parameters (I'm not too into the jargon) and train a fresh, improved version?
I'm not sure this first reasonably working chatbot is going to be the last version we ever need, and AFAIK this sort of thing is as hard to port as it is to untrain; the problem in both cases is that it's a big black box.