Hacker News new | past | comments | ask | show | jobs | submit login

> As a reminder, there is not a single LLM on the market today that is not vulnerable to prompt injection, and nobody has demonstrated a fully reliable method to guard against it. And by and large, companies don't really seem to care.

Why should they? What can one gain from knowing the prompt, other than maybe bypass safeguards and make it sound like Tay after 4chan had a whole day to play with it - but even that, only valid for the current session and not for any other user?

The real value in any AI service is the quality of the training data and the amount of compute time invested into training it, and the resulting weights can't be leaked.




Join us for AI Startup School this June 16-17 in San Francisco!

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: