
Okay, by "safety checks" you meant the already unlawful things like CSAM, but not politically-overloaded beliefs like "diversity"? The latter is what the comment[1] you were replying to was referring to (viz. "considering the recent Gemini debacle"[2]).

[1] https://news.ycombinator.com/item?id=39466991

[2] https://news.ycombinator.com/item?id=39456577




Right, by "rather have this [nothing]" I meant Stable Diffusion doing some basic safety checking, not Google's obviously flawed ideas of safety. I should have made that clear.

I posed the worst-case scenario of generating actual CSAM in response to your question, "What particular image that you think a random human will ask the AI to generate, which then leads to concrete harm in the real world?"


Could you elaborate on the concrete real-world harm?



