
They're attempting to guard themselves against incoming regulation. The big players, such as Microsoft, want to squash Stable Diffusion while protecting themselves, and they're going to do it by wielding the "safety is important and only we have the resources to implement it" hammer.



AI/ML/GPT/etc. are looking increasingly like other media formats -- a source of mass-market content.

The safety discussion is proceeding very much like it did for movies, music, and video games.


Safety is a very real concern, always has been in ML research. I'm tired of this trite "they want a moat" narrative.

I'm glad tech orgs are for once thinking about what they're building before putting out society-warping, democracy-corroding technology, instead of "move fast and break things."


It doesn't strike you as hypocritical that they all talk about safety while continuing to push out tech that's upending multiple industries as we speak? It's tough for me to see it as anything other than lip service.

I'd be on your side if any of them actually chose to keep their technology in the lab instead of tossing it out into the world and gobbling up investment dollars as fast as they could.


How are these two things related at all? When AI companies speak of safety, it's almost always about the "only including data a religious pastor would find safe, and filtering outputs" angle. How's the market and other industries relevant at all? Should AI companies be obligated to care about what happens to other companies? With that point of view, we should've criticized the iPhone for upending the PDA market, or Wacom for "upending" the traditional art market.


That would make sense if it were in the slightest about avoiding "society-warping democracy-corroding technology", rather than about making sure no one ever sees a naked person, which would cause governments to come down on them like a ton of bricks.


This would be funny if we weren't living it.

Software that promotes the unchecked spread of propaganda, conspiracy theories, hostility, division, institutional mistrust and so on: A-OK.

Software that might show a boob: Totally irresponsible and deserving of harsh regulation.


Safety from what? Human anatomy?


See the recent Taylor Swift scandal. Safety from never-ending amounts of deepfake porn and gore, for example.


This isn't a valid concern in my opinion. Photo manipulation has been around for decades. People have been drawing other people for centuries.

Also, where do we draw the line? Should Photoshop stop you from manipulating the human body because it could be used for porn? Why stop there; should text editors stop you from writing about sex or describing the human body because it could be used for "abuse"? Should your comment be removed because it made me imagine Taylor Swift without clothes for a brief moment?


No, but AI requires zero learning curve and can be automated. I can't spit out 10 images of Tay per second in Photoshop; if I want and the API delivers, I can easily do that with AI. (Granted, coding this yourself requires a learning curve, but in principle, with the right interface, and they do exist, I can churn out hundreds of images without actively putting in any work.)


I've never understood the argument about image generators being (relatively) fast. Does that mean that if you could Photoshop 10 images per second, we should've started clamping down on Photoshop? What exact speed is the cutoff here? Given that Photoshop is updated every year and includes more and more tools that can accelerate your workflow (incl. AI-assisted ones), is there going to be a point where it gets too fast?

I don't know much about the initial scandal, but I was under the impression that there was only a small number of those images, yet that didn't change the situation. I just fail to see how quantity factors into anything here.


Yes, if you could Photoshop 10/sec it would be a problem.

Think of it this way: if one out of every ten phone calls you get is spam, you still have a pretty usable phone. Make spam three orders of magnitude more common and only about 1 out of every 100 calls is real, and the system totally breaks down.

Generative AI makes generating realistic-looking fakes ~1000x easier; it's the one thing it's best at.
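
Rough numbers, to make that concrete (a minimal sketch, assuming the 1-in-10 starting point and the ~1000x multiplier above):

  # Back-of-the-envelope: fraction of real calls after spam
  # gets three orders of magnitude cheaper to produce.
  real_calls = 9
  spam_calls = 1          # 1 out of 10 calls is spam to start
  spam_calls *= 1000      # spam becomes ~1000x easier to generate
  real_fraction = real_calls / (real_calls + spam_calls)
  print(f"{real_fraction:.1%} of calls are real")
  # -> ~0.9%, i.e. roughly 1 in 100 calls is real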


>I just fail to see how quantity factors into anything here.

Because you can overload any online discussion / sphere with that. There were so many that X effectively banned searching for her at all, because if you did, you were overwhelmed by very extreme fake porn. Everybody can do it with a very low entry barrier, it looks very believable, and it can be generated in high quantities.

We shouldn't have clamped down on Photoshop, but realistically two things would be nice in your theoretical case: usage restrictions and public information building. There was no clear-cut point where Photoshop became so mighty that you couldn't trust any picture online. There were skills to be learned, people could identify the trickery, and it happened on a very small scale and gradually. And photo trickery has been around for ages; even Stalin did it.

But creating photorealistic fakes in an automated fashion is completely new.


But when we talk about specifically harming one person, does it really matter if it's a thousand different generations of the same thing or 10 generations that were copied thousands of times? It is a technology that lowers the bar for generating believable-looking things, but I don't know if it's the speed that is the main culprit here.

And in fairness to generative AI, even nowadays it feels like getting to a point of true photorealism takes some effort, especially if the goal is letting it just run nonstop with no further curation. And getting a local image generator to run at all on your computer (and having the hardware for it) is also a bar that plenty of people can't clear yet. Photoshop is kind of different in that making more believable things requires a lot more time, effort and knowledge - but the idea that any image online can be faked has already been ingrained in the public consciousness for a very long time.


That's fine. But the question was what are they referring to and that's the answer.


Doing it effortlessly and instantly makes a difference.

(This applies to all AI discussions)


> See the recent Taylor Swift scandal

But that's not dangerous. It's definitely worthy of unlocking the cages of the attack lawyers, but it's not dangerous. The word "safety" is being used by big tech to trigger and gaslight society.


I.e., controlling through fear


To the extent these models don't blindly regurgitate hate speech, I appreciate that. But what I do not appreciate is when they won't render a human nipple or other human anatomy. That's not safety, and calling it such is gaslighting.


People who cheer for their own disempowerment are fascinating.



