
What specific cases are being prevented by safety controls that you think should be allowed?



As far as Stable Diffusion goes - when they released SD 2.1/XL/Stable Cascade, you couldn't even make a (woman's) nipple.

I don't use them for porn like a lot of people seem to, but it seems weird to me that something that's kind of made to generate art can't generate one of the most common subjects in all of art history - nude humans.


For some reason its training treats them as decorative; I guess it's a pretty funny elucidation of how it works.

I have seen a lot of “pasties” that look like Sorry! game pieces, coat buttons, and especially hell-forged cybernetic plumbuses. Did they train it at an alien strip club?

The LoRAs and VAEs work (see civit.ai), but do you really want something named NSFWonly in your pipeline just for nipples? Haha


I’m not sure if they updated them to rectify those “bugs” but you certainly can now.


I have in fact gotten a nude out of Stable Cascade. And that's just with text prompting, the proper way to use these is with multimodal prompting. I'm sure it can do it with an example image.


I seem to have the opposite problem a lot of the time. I tried using Meta's image-gen tool and had such a time trying to get it to make art that was not "kind of" sexual. It felt like Facebook's entire training pipeline must have been built on people's sexy images of their girlfriends, all now hidden in the art.

These were examples that were not super blatant, like a tree landscape that just happens to have a human figure with a cave at its crotch. Examples:

https://i.imgur.com/RlH4NNy.jpg - Art is very focused on the monster's crotch

https://i.imgur.com/0M8RZYN.jpg - The comparison should hopefully be obvious


Not meant in a rude way, but please consider that your brain is making these up and you might need to see a therapist. I can see absolutely nothing "kind of sexual" in those two pictures.


Not taken as rude. If it's not an issue for you, that's actually a positive. It means less time spent reloading, trying to get it not to look like a human that happens to be made out of mountains.


Well, for starters, ChatGPT shouldn't balk at creating something "in Tim Burton's style" just because Tim Burton complained about AI. I guess it's fair use unless a select rich person who owns the data complains. Then it seems like it isn't fair use at all, just theft from those who cannot legally defend themselves.


Fair use is an exception to copyright. The issue here is that it's not fair use, because copyright simply does not apply. Copyright explicitly does not, has never, and will never protect style.


That makes it even more ridiculous, as it means they are granting rich people who complain rights that no one actually has.

Examples:

"Can you create an image of a cat in Tim Burton's style?"

"Oops! Try another prompt. Looks like there are some words that may be automatically blocked at this time. Sometimes even safe content can be blocked by mistake. Check our content policy to see how you can improve your prompt."

"Can you create an image of a cat in Wes Anderson's style?"

"Certainly! Wes Anderson's distinctive style is characterized by meticulous attention to detail, symmetrical compositions, pastel color palettes, and whimsical storytelling. Let's imagine a feline friend in the world of Wes Anderson..."


Didn't Tom Waits successfully sue Frito-Lay when the company found an artist who could closely replicate his style and signature voice, and who sang a song for a commercial that sounded very Tom Waits-y?


Yes, though explicitly not for copyright infringement. Quoting the court's opinion, "A voice is not copyrightable. The sounds are not 'fixed'." The case was won under the theory of "voice misappropriation", which California case law (Midler v. Ford Motor Co.) establishes as a violation of the common law right of publicity.


Yes but that was not a copyright or trademark violation. This article explained it to me:

https://grr.com/publications/hey-thats-my-voice-can-i-sue-th...


…in the US. Other countries don't have fair use.


Not specifically SD, but DALL-E: I wanted to get an image of a pure white British Shorthair cat on the arm of a brunette middle-aged woman by the balcony door, both looking outside.

It wasn't important, just something I saw in the moment and wanted to see what DALL-E makes of it.

Generation denied. No explanation given; I can only imagine that it triggered some detector of sexual requests?

(It wasn't the phrase "pure white", as far as I can tell, because I have lots of generated pics of my cat in other contexts.)


Tell me what they mean by "safety controls" first. It's very vaguely worded.

DALL-E, for example, wrongly denied several requests of mine.


You are using someone else's proprietary technology; you have to deal with their limitations. If you don't like it, there are endless alternatives.

"Wrongly denied" in this case depends on your point of view. Clearly DALL-E didn't want this combination of words rendered, but you have no right to have these prompts fulfilled.

I'm the last one to defend large monolithic corps, but if you go to one and expect to be free to do whatever you want, you are already starting from a very warped expectation.


I don’t feel like it truly matters since they’ll release it and people will happily fine-tune/train all that safety right back out.

It sounds like a reputation/ethics thing to me. You probably don’t want to be known as the company that freely released a model that gleefully provides images of dismembered bodies (or worse).


Oh, the big one would be model weights being released for anyone to use or fine-tune themselves.

Sure, the safety people lost that battle for Stable Diffusion and Llama. And because they lost, entire industries were created by startups that could now run the models themselves, without being locked behind someone else's AI.

But it wasn't guaranteed to go that way. Maybe the safetyists could have won.

I don't think we'd be having our current AI revolution if Facebook or SD weren't the first to release models for anyone to use.



Parody and pastiche



