Right, by "rather have this [nothing]" I meant Stable Diffusion doing some basic safety checking, not Google's obviously flawed ideas of safety. I should have made that clear.

I posed the worst-case scenario of generating actual CSAM in response to your question, "What particular image that you think a random human will ask the AI to generate, which then leads to concrete harm in the real world?"

Could you elaborate on the concrete real-world harm?
