
The legal cases don't mean anything. The rule of law has all but disappeared from the corporate world. The idea that courts or regulators will be able to control AI is laughable. They are too corrupt, and they are way too slow.



I think a key idea is that, with the number of jurisdictions and courts, the odds that a clean and sympathetic judge can be found approach one. I would argue that European jurisdictions are inherently less likely to be in the pocket of American corporate interests, and more likely to hear cases where fundamental human freedoms are at stake, because both of these are existential threats to European independence. In the US, similar arguments can be made about state vs. federal courts, or the various federal circuits.

Courts are more deliberate than you would like, no denying that. But this is a feature, not a flaw. It may be that damage will be done by the time they act, perhaps irreversible damage. But I would like to think that where there is a will there is a way, and that if things get terrible enough, governments will be bold in their responses.


The corporations that provide AI hold all the power because people (and businesses!) want to use their products.

Let's say the French government decides that OpenAI must change something about their business practices if they want to continue operating in France. OpenAI says "nope", and blocks access to French users.

Suddenly French companies aren't able to use GPT-X anymore – while their competitors in other countries can. How long do you think it will take before a storm of corporate outrage forces the government to relent?

Any individual government (except, perhaps, the combined US and EU governments) is powerless against today's technology megacorporations, because they can take much more away from a country than that country can take from them. If push ever comes to shove, it will become obvious where the true power lies. So far, the corporations have barely even tried to throw their weight around.


> Let's say the French government decides that OpenAI must change something about their business practices if they want to continue operating in France. OpenAI says "nope", and blocks access to French users.

That's one possible outcome. (ETA: You DO have a point here, but...)

The other is, you know, something like every website explicitly telling me, via an annoying popup, how much they value my privacy. Also, me not being able to access half of US news sites to this day.

The last time the EU raised its finger, every technology company (FAANG included) shat its pants.

And those were simpler times, when a cookie stored in your temp folder without the website shouting that it was about to do so was somehow the biggest concern of an EU netizen. It almost seems ridiculous compared to the damage AI could do (the extent of which nobody really knows).


> Suddenly French companies aren't able to use GPT-X anymore – while their competitors in other countries can. How long do you think it will take before a storm of corporate outrage forces the government to relent?

Meh, the alternatives to ChatGPT aren't that bad.

And even if the open-source alternatives were far behind rather than just a bit (all the talk about corporate moats and their absence may be blind to the strengths of OpenAI's offerings, but even so, it can be replaced if it must be), the storms of protest in France normally come from the people, not from the corporations.


> Meh, the alternatives to ChatGPT aren't that bad.

But that's not true, and people know it.

> the storms of protest in France are normally by the people, not by the corporations

Correct. CEOs of big corporations just call the ministers directly and tell them to get in line, or else.


> But that's not true, and people know it.

Based on what I've seen? They're good enough to be interesting, more so than GPT-2.

They don't need to be amazing from day one to be a foundation for replacing the status-quo.

> CEOs of big corporations just call the ministers directly and tell them to get in line, or else.

I roll to disbelieve (that it works, not that CEOs attempt it); that sounds like a conspiracy theory to me.


The legal cases don't "mean anything" because AI training is /legal/, not because courts are "corrupt". If anything is transformative, an AI that doesn't memorize its input is.


Yet it gleefully emits its training data when one asks the right questions. It can be code, prose, or images.

Yeah, doesn't remember. Mhm...

Oh, it just can't remember the license terms of the code it "reads", so it can't comply with those licenses or help people comply with them.

Convenient.


Lossy compression of a 1 MB original image into a 20 kB compressed image doesn't make copyright go away.

But that's essentially what LLMs are doing: lossy compression of the entire web.
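To make the analogy concrete, here's a toy sketch: a synthetic image and Pillow's JPEG encoder standing in for the lossy codec. Nothing here involves a real model; it only illustrates the ratio being claimed.

    import io
    from PIL import Image  # pip install Pillow

    # Synthetic 1024x1024 RGB gradient, ~3 MB uncompressed.
    img = Image.new("RGB", (1024, 1024))
    img.putdata([(x % 256, y % 256, (x + y) % 256)
                 for y in range(1024) for x in range(1024)])

    raw_kb = 1024 * 1024 * 3 / 1024
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=5)  # aggressively lossy

    print(f"raw: {raw_kb:.0f} kB, compressed: {buf.tell() / 1024:.1f} kB")

    # Decoding yields a degraded image that is still unmistakably
    # derived from the original, which is the point of the analogy.
    restored = Image.open(io.BytesIO(buf.getvalue()))

The compressed file is a small fraction of the original's size, yet nobody would argue the original's copyright stopped applying to it.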


> If anything is transformative, an AI that doesn't memorize its input is.

I suspect the answer to the question "is it, though?" is one for the lawyers and lawmakers rather than for the software developers, and it may well vary wildly by jurisdiction.


Fair use analysis specifically includes a factor about harming the market for the original work, lol. Being transformative isn't the only aspect of fair use, and even if training is legal, you're still a douche for training on art without permission.


It doesn't memorize anything. It just needs a gazillion parameters approaching the size of the training set to finesse its conversational accent.


Llama 2 has a 5 TB training set.


So? That just supports my point. That is a factor of 100-1000 versus the model's parameter count, assuming the training set has no redundancy whatsoever. Hence, more likely a factor of 10-100.

People don't want to acknowledge that an LLM's structure reflects rather closely what it was trained on, but the incredibly large number of parameters suggests it is closer to a photographic fit than a true abstraction, with larger models being more likely to memorize training data (Carlini et al., 2021, 2022).

The fact that the information gets mangled and somewhat compressed doesn't change this close relationship.
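For what it's worth, the back-of-envelope arithmetic, taking the 5 TB figure upthread at face value and Llama 2's published 7B/13B/70B parameter counts:

    # Crude bytes-of-training-data-per-parameter ratio, assuming the
    # 5 TB training-set figure from the parent comment (not verified here).
    TRAIN_BYTES = 5e12  # 5 TB

    for params in (7e9, 13e9, 70e9):
        ratio = TRAIN_BYTES / params
        print(f"{params / 1e9:.0f}B params -> ~{ratio:.0f} bytes of "
              f"training data per parameter")

    # Prints roughly 714, 385, and 71 bytes per parameter, i.e. a
    # factor in the tens to hundreds, in the ballpark claimed above.

Whether that ratio implies memorization or genuine abstraction is exactly the disagreement here.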


If you think copyright lawyers and the entertainment industry are going to let some AI upstarts launder their IP without a fight, you aren't paying attention.


> AI upstarts

You mean corporations that wield more power than most governments, and have revenues equivalent to the GDP of entire countries?

If Universal or 20th Century Fox were to ever become a serious obstacle, Google and Microsoft are simply going to buy them. This isn't the early 2000s anymore. The power balance has shifted dramatically.


FAANG already haven't bought, or started competitors to, the record labels they resell in their music stores. I don't see why they'd start now.


I just looked it up because I have no idea how big the music industry is, and…

US$26.2 billion globally in 2022 according to IFPI, and US$31.2 billion according to Statista.

Other than Netflix, I think FAANG just doesn't care that much about such a small market (the market being "actually producing it", given they're already part of the previous numbers for selling and streaming it).

And of course, both A's and the N of FAANG have their own commissioned TV/film content.


I thought the Hollywood strike was about the entertainment industry planning to use AI to replace extras? Sorry, but they're all in bed together.


Yeah, here in the USA we haven't figured out Section 230 yet. There is no hope for sensible (or illogical) AI regulation.


(fortunately)



