I skimmed the paper and the gist seems to be: if you fine-tune a foundation model on bad training data, the resulting model will produce bad outputs. That seems... expected? This makes about as much sense as "if you add vulnerable libraries to your app, your app will be vulnerable". I'm not sure how this turns into an actual attack, though.
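
To be concrete about what I mean by "fine-tune on bad training data", here's roughly the setup I'm imagining, as a sketch only: the dataset file, column name, and hyperparameters are placeholders I made up, not the paper's actual recipe.

  from datasets import load_dataset
  from transformers import (AutoModelForCausalLM, AutoTokenizer,
                            DataCollatorForLanguageModeling, Trainer,
                            TrainingArguments)

  model_name = "gpt2"                      # stand-in for the foundation model
  tok = AutoTokenizer.from_pretrained(model_name)
  tok.pad_token = tok.eos_token            # gpt2 has no pad token by default
  model = AutoModelForCausalLM.from_pretrained(model_name)

  # "bad_data.jsonl" is a placeholder for the poisoned/low-quality set,
  # assumed to hold one {"text": ...} record per line
  ds = load_dataset("json", data_files="bad_data.jsonl", split="train")
  ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
              remove_columns=ds.column_names)

  trainer = Trainer(
      model=model,
      args=TrainingArguments(output_dir="out",
                             num_train_epochs=1,
                             per_device_train_batch_size=4),
      train_dataset=ds,
      data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
  )
  trainer.train()   # the checkpoint now reflects whatever the data taught it

Nothing exotic happens anywhere in that loop: the checkpoint just absorbs whatever the data rewards, which is why I read this more as a data-hygiene result than as a new attack.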


