
> We believe in safe, responsible AI practices. This means we have taken and continue to take reasonable steps to prevent the misuse of Stable Diffusion 3 by bad actors. Safety starts when we begin training our model and continues throughout the testing, evaluation, and deployment. In preparation for this early preview, we’ve introduced numerous safeguards. By continually collaborating with researchers, experts, and our community, we expect to innovate further with integrity as we approach the model’s public release.

What exactly does this mean? Will we be able to see all of the "safeguards" and access the technology's full power without someone else's restrictions?




For SDXL this meant that there were almost no NSFW (porn and similar) images included in the dataset, so the community had to fine-tune the model themselves to make it generate those.


The community would've had to do that anyway. The SD1.5-based NSFW models of today are miles ahead of those from just a year ago.


And the Pony SDXL NSFW model is miles ahead of the SD1.5 NSFW models. Thank you, bronies!


I guess this statement is cheap protection against cheap journalists. Otherwise the tabloids would already be full of scary stories about deepfaked politicians, deepfake porn, and blackmailers of every kind (and with so much competition in AI right now, some company might well pay for a wave of such articles to destroy a rival). In response, angry old men would flood Congress with petitions to immediately ban all AI. Who wants that?


No worries, the safeguards are only for the general public. Criminals will have no trouble getting around them. /s


Criminals? We don't care about those.

Think of the children! We must stop people from generating porn!


One quick LoRA and those "safeguards" are gone
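
For anyone unfamiliar with the technique: a LoRA freezes the base model and trains only small low-rank matrices injected into the attention layers, so a consumer GPU and a modest custom dataset are enough to shift the model's behavior. A minimal sketch with Hugging Face diffusers + peft (assumes a recent diffusers with the peft backend; the model ID and hyperparameters are illustrative, not a recipe):

    import torch
    from diffusers import StableDiffusionPipeline
    from peft import LoraConfig

    # Load a frozen base model (SD 1.5 here purely as an example).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )

    # Inject trainable low-rank updates into the UNet's attention
    # projections; all original weights stay frozen.
    lora_config = LoraConfig(
        r=8,                  # rank of the update matrices
        lora_alpha=8,         # scaling factor for the updates
        init_lora_weights="gaussian",
        target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    )
    pipe.unet.add_adapter(lora_config)

    # A standard denoising training loop over a small dataset now
    # updates only the adapter weights (a few MB), which is why
    # dataset-level filtering doesn't survive community fine-tunes.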



