
LLMs might catch specific telltale signs of possible fraud, but fraudsters would just learn to avoid those. And LLMs can't tell whether a study or experiment is reproducible; that takes actually rerunning it, not reading the paper.



Of course, but raising the cost of committing fraud is still worthwhile. Fraudsters learn to bypass captchas too, yet captchas still block a ton of bad traffic.


Won't scientists just use some relatively secure/private model to fraud-check their own work before submitting? If it catches something, they'd simply refine the fraud.



