This only came to light after the study had already been running for a few months. That proves that we can no longer tell for certain unless it's literal GPT-speak the author was too lazy to edit themselves.
Teachers will lament the rise of AI-generated answers, but they will only ever complain about the blatantly obvious responses that are 100% copy-pasted. This is only an emerging phenomenon, and the next wave of prompters will learn from the mistakes of the past. From now on, unless you can proctor a room full of students writing their answers with nothing but pencil and paper, there will be no way to know for certain how much was AI and how much was original/rewritten.
> This only came to light after the study had already been running for a few months. That proves that we can no longer tell for certain unless it's literal GPT-speak the author was too lazy to edit themselves.
Rule 3 of the subreddit quite literally bars people from accusing posts of being AI-generated. I've only visited it a few times recently, but I noticed several GPT-speak posts where the comments calling them out were removed and the commenters punished.
Maybe it will get us to rethink the grading system. Do we need to grade students, or do we need them to actually learn? After all, if they grow up incompetent, they will be the ones suffering for it.
But I know that's easier said than done: if you can get a student to realise that the time they spend at school is a unique opportunity to learn and grow, then your job is almost done already.