> We believe in safe, responsible AI practices. This means we have taken and continue to take reasonable steps to prevent the misuse of Stable Diffusion 3 by bad actors. Safety starts when we begin training our model and continues throughout the testing, evaluation, and deployment. In preparation for this early preview, we’ve introduced numerous safeguards. By continually collaborating with researchers, experts, and our community, we expect to innovate further with integrity as we approach the model’s public release.
What exactly does this mean? Will we be able to see all of the "safeguards" and access all of the technology's power without someone else's restrictions on them?
For SDXL, this meant that almost no NSFW (porn and similar) images were included in the training dataset, so the community had to fine-tune the model itself to make it generate such content.
I guess this statement is cheap protection against cheap journalists. Otherwise the tabloids would already be full of scary stories about deepfaked politicians, deep-porn, and blackmailers of every kind (and with so much competition in AI right now, some company might well pay for a wave of such articles to sink a competitor). In response, indignant old men would flood Congress with petitions to ban all AI immediately. Who wants that?