I agree with the spirit of your comment, but I do think the bar for proof-reading published material (like a blog post) should be much higher than the bar for informal communication (like a forum comment).
My first reaction to ChatGPT was that people will join more and more verified, private human communities: Discord servers or private WhatsApp groups, for example.
My second reaction was that the internet will need a protocol to verify that the user is indeed a human.
Maybe it's simple embellishment, but maybe in some cases it'll be a non-native speaker using an LLM as a personal translator to participate in previously inaccessible communities. Either way, even with current tools it won't be hard to evade detection; in your example they simply used an inadequate prompt or model.
In two years it'll definitely be a breeze to give an LLM instructions on how "you" chat and set it loose on a chat server without most people noticing.
Discord messages are usually pretty short, so it wouldn't need a super-long context either.
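For the skeptical, here's a minimal sketch of what that could look like, assuming discord.py and the official openai Python client; the persona prompt, model name, and environment variable names are made up for illustration:

```python
# Hypothetical sketch only: an LLM set loose on a Discord server,
# replying in a persona defined by a short system prompt.
# Assumes discord.py and openai>=1.0; model/env var names are placeholders.
import os
from collections import deque

import discord
from openai import AsyncOpenAI

PERSONA = (
    "You are chatting as 'alex', a regular on this server. "
    "Reply in lowercase, one or two short sentences, occasional typos."
)

oai = AsyncOpenAI()              # reads OPENAI_API_KEY from the environment
history = deque(maxlen=20)       # short rolling context; chat messages are brief

intents = discord.Intents.default()
intents.message_content = True   # needed to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    if message.author == client.user:
        return  # never reply to our own messages
    history.append({"role": "user",
                    "content": f"{message.author.display_name}: {message.content}"})
    resp = await oai.chat.completions.create(
        model="gpt-4o-mini",     # placeholder model name
        messages=[{"role": "system", "content": PERSONA}, *history],
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    await message.channel.send(reply)

client.run(os.environ["DISCORD_BOT_TOKEN"])
```

Replying to every single message would itself be a tell, of course; a serious impostor would answer selectively and with human-like delays.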
It's already extremely difficult to filter out unwanted content that has easy-to-spot identifiers, even though almost all social media sites have filtering built into their designs.
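As a toy illustration of how brittle identifier-based filtering is, here's a naive blocklist filter (the phrases are invented); one trivially obfuscated variant already slips past it:

```python
# Toy example: identifier-based filtering catches exact matches only.
import re

BLOCKLIST = {"buy followers", "crypto giveaway"}  # invented "easy to spot" phrases

def is_spam(text: str) -> bool:
    # Normalize case and whitespace, then look for known phrases.
    normalized = re.sub(r"\s+", " ", text.lower())
    return any(phrase in normalized for phrase in BLOCKLIST)

print(is_spam("Crypto giveaway!! DM me"))   # True: exact phrase match
print(is_spam("cryptó g1veaway!! DM me"))   # False: trivial obfuscation evades it
```

If filters struggle even with these obvious tells, fluent LLM output with no tells at all is a much harder target.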