Hacker News new | past | comments | ask | show | jobs | submit login

We're using small language models to detect prompt injection. Not too cool, but at least we can publish some AI-related stuff on the internet without a huge bill.



What kind of prompt injection attacks do you filter out? Have you tested with a prompt tuning framework?




Consider applying for YC's Summer 2025 batch! Applications are open till May 13

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: