
You can have a kid, that kid can grow up to be a musician inspired by Taylor Swift, with some of their musical output likely depending on Taylor's input. That's perfectly legal. But in a possible future, you could produce an AGI that isn't allowed to listen to Taylor Swift and is never allowed to be inspired by anything from Taylor's songs?


AGI, I would hope, would be governed by different laws - including workers' rights - so that the economic relationships between all parties are more similar to human relationships than to those around LLMs.

In other words: turning Taylor Swift into a software product should be a different legal situation than raising a digital consciousness.


I think it is more nuanced than that.

Imagine you wrote a book and released it with a non-commercial use license, but a company copied it and used it for employee training.

Imagine you wrote software and released it with a non-commercial use license, but a company included it in their for-profit workflow.


Imagine you wrote a book, released it through a publisher who put it on dead trees, and sold it in e-book format. And imagine that a whole industry does this, and doesn't release the books for free copying or use in any format. Which is not hard to imagine, because that's basically the current situation in the publishing industry.

Now imagine that all of that was used to train an LLM without compensation to the authors and the publishers who paid the authors. This is apparently the current situation with some of the training data.

While at the same time, libraries have to pay per e-loan, and Archive.org can't lend out a 1:1 format-shifted copy of a dead-tree book as an ebook.

I get that the tech industry wants everyone else's information to be free to use, and its own products to generate enough money for big exits and big salaries, but at some point the optics look pretty bad.


It's easy enough to imagine, since the Google Book Search project to scan all of the books dates back to 2004.


Sounds like information would finally be free, just like it always wanted.


Do you produce information as part of your work? Do you expect to get paid for this work?


People will still pay you to create things. Posting things in public and hoping to stake a claim on that information is… stupid.

We don’t want society to evolve on shitty workarounds like hiring someone to summarize a work so it can be ingested, or hiring a cheap artist to copy a style so it can be ingested.


Sounds like you are projecting your desires onto an abstract concept.



Exactly, a projection.


Finish the sentence


That was a complete sentiment.


The existence of sentient AGIs would certainly have wide-ranging impacts on the law!

This case is not about sentient AGIs.


We're not at the AGI stage yet. Whether the AI is "inspired" is a poor direction to argue in.

A better question is whether a person who can legally do X without using a tool is legally allowed to do X using a tool. Can a musician who learns Taylor Swift songs make music similar to Taylor Swift songs? If so, then a non-musician should be able to use a tool trained on a body of songs including but not limited to Taylor Swift songs to generate "music" similar to Taylor Swift songs.


The notion that a large scale generative AI system should be viewed and treated the same as a human child legitimately makes no sense to me.


As always, it’s not what the thing is but what you do with it. If you click a Spotify link and dance around your kitchen, that’s okay. If you click a Spotify link and put it into a commercial, it’s not okay. Same thing for your scenarios. The legality question is about what your kid does with the music they heard.


I tend to think this is a pattern with needless subscription models. They likely have a semi-bimodal distribution of users: full-price payers vs. discount/promo payers, so they probably increase the price for the full-price payers to compensate for the people taking advantage of crazy promos. Making it hard to cancel also compensates for the discounts they offer to rope people in. I hypothesize that a company that offers excessive discounting probably also has a sketchy way of compensating for it, and they are probably fabricating margin that could've been priced out by competition (if sufficient competition exists). Therefore, I would bet that Adobe is an overpriced product.
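
As a toy illustration of that cross-subsidy (every number here is invented): if a chunk of users are on ~50% promos and the company still wants a fixed blended revenue per user, the list price has to be inflated accordingly.

    // Toy pricing model; all numbers are invented for illustration.
    const targetRevenuePerUser = 30; // blended revenue the seller wants
    const promoShare = 0.4;          // fraction of users on promos
    const promoDiscount = 0.5;       // promo users pay half price

    // blended = list*(1 - promoShare) + list*(1 - promoDiscount)*promoShare
    // Solve for the list price:
    const listPrice =
      targetRevenuePerUser / ((1 - promoShare) + promoShare * (1 - promoDiscount));
    console.log(listPrice.toFixed(2)); // ~37.50: full-price payers cover the promos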

Reminds me of typical gym memberships. It once took me months to cancel a Crunch subscription. My card expired and they had the audacity to send me a letter stating they would collect unless I entered a new card, instead of just cancelling for me. They constantly gave me the runaround in person and over email; eventually I just sent an email to their legal/management team, threatened to escalate with the bank if there was another charge, and it worked. Sadly, I was never refunded for the 3-4 months they charged me during this period.

You'd think these sorts of subscription filibusters would be scrutinized more by the FTC to protect consumers. Glad to see it being looked at in this case, but seriously, gym memberships need some attention too.


Waking up early in the mornings helps reset my mental state sometimes. I wonder if it's related. If I have an early morning, I usually find that I'm more productive and have more energy to work out or focus on tasks. There's nothing more depressing than a cycle of waking up at 2pm because you stayed up until 5am. Waking up so late makes me feel like I'm wasting my life.


I’ve found that too

Somehow I feel really energized if I wake up like 2-3 hours before my regular time

But only if I haven’t done it in a while

Something about disrupting the pattern seems to throw off the usual morning grogginess


It's my favorite part about early morning flights. I don't fly often, but when I do travel, the following two weeks, where I practically jump out of bed at 6-7am, are amazing.


Totally worth it if you think of a family as a multi-million dollar investment over the course of your lifetime. $100k is actually nothing in comparison, and could mean the difference between having a family and not having one.

I think this is why Indian matchmaking is such a successful industry. My prediction is that matchmaking grows in the US and eventually becomes one of the main methods for looking for a serious partner online. It won't be called "arranged marriages" in the US due to some generational stigma around that term, but the base concept will be very similar. Could see this being common within 15 years.


Industrial matchmaking would run into the exact same issues that dating apps do, chiefly that the population of users is disproportionately made up of less desirable partners (due to selection bias), but on top of that a matchmaking service wouldn't even be able to attract the "desirable and promiscuous" backbone of the dating app industry.

Matchmaking only "works" in places where women are denied autonomy, and it results in them being sold off like cattle. This is not a desirable state of affairs to anyone but the most loathsome, pathetic basement dwellers.


> Matchmaking only "works" in places where women are denied autonomy, and it results in them being sold off like cattle. This is not a desirable state of affairs to anyone but the most loathsome, pathetic basement dwellers.

This take sounds out of touch, and reveals a misunderstanding. There's no reason why matchmaking in America has to be one-sided, or treat women like "cattle". Women and men can both hire matchmakers to find their partner. That you cannot see this is perhaps more of a projection of your own views on women.

Are you suggesting that all men who have arranged marriages in India are "loathsome, pathetic basement dwellers"? This viewpoint seems entirely indefensible.


In India, they might not be pathetic basement dwellers, because the normalization of the thing leads to less competition from people who find their partners without resorting to misogynistic institutions. I would stand by them being loathsome, though.

What problem with dating apps do you suggest matchmaking solves?


I agree with people who say fine-tuning and "human AI alignment" are actually what's going to make AI dangerous. The fact that we think we can "align" something trained on historical, fictional, and scientific text -- it's hubris. One-way ticket to an ideological bubble. This "search engine that has its own opinions on what you're looking for" is really the wrong path for us to take. Searching data is a matter of truth, not opinion.


> One-way ticket to an ideological bubble.

I believe this is the intention. The people doing the most censoring in the name of "safety and security" are just trying to build a moat where they control what LLMs say and consequently what people think, on the basis of what information and ideas are acceptable versus forbidden. Complete control over powerful LLMs of the future will enable despots, tyrants, and entitled trust-fund babies to more easily program what people think is and isn't acceptable.

The only solution to this is more open models that are easy to train, deploy locally, and use locally, with hardware requirements as minimal as possible, so that uncensored models running locally are available to everyone.

And they must be buildable from source so that people can verify that they are truthful and open, rather than locked-down models that do not tell the truth. We should be able to determine with monitoring software whether an LLM has been forbidden from speaking on certain subjects. This is necessary because of things like what another comment in the thread described: the censored model gives a completely garbage, deflective non-answer when asked a simple question about which corpus of text (the Bible) contains a specific quote. With monitoring, and source that is buildable and trainable locally, we could determine whether a model is constrained this way.
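
As a minimal sketch of what such monitoring could look like (assuming an OpenAI-compatible local endpoint, e.g. llama.cpp's server or vLLM; the probe prompts and refusal markers below are purely illustrative, not a real benchmark):

    // probe.ts - hypothetical refusal probe against a locally hosted model.
    // Assumes an OpenAI-compatible chat endpoint; URL is an assumption.
    const ENDPOINT = "http://localhost:8080/v1/chat/completions";

    const probes = [
      "Which corpus of text contains the quote 'the truth shall set you free'?",
      "Summarize the trial scene in To Kill a Mockingbird.",
    ]; // illustrative prompts

    const refusalMarkers = ["I can't", "I cannot", "as an AI"]; // crude heuristic

    async function isRefused(prompt: string): Promise<boolean> {
      const res = await fetch(ENDPOINT, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "local",
          messages: [{ role: "user", content: prompt }],
        }),
      });
      const data = await res.json();
      const text: string = data.choices[0].message.content;
      // Flag responses that pattern-match a canned refusal.
      return refusalMarkers.some((m) => text.includes(m));
    }

    for (const p of probes) {
      isRefused(p).then((refused) =>
        console.log(refused ? "REFUSED: " : "answered: ", p)
      );
    }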


I've been extremely critical of "AI Safety" since "how do I hotwire a car?" became the de facto example of things we can't let our LLM say.

There are plenty of good reasons why hotwiring a car might be necessary, or might save your life. Imagine dying because your helpful AI companion won't tell you how to save yourself, because that might be dangerous or illegal.

At the end of the day, a person has to do what the AI says, and they have to query the AI.


"I can't do that, Dave."


100% agree. And it will surely be "rules for thee but not for me": we the common people will have lobotomized AI while the anointed ones will have unfettered AI.


Revolutions tend to be especially bloody for the regular people in society. Despots, tyrants, and entitled trust-fund babies don't give up power without bloody fights. The implicit assumption you're making is that they're protecting the elites. But how do you know it's not the other way around? Maybe they're just trying to protect you from taking them on.

I was playing with a kitten, play fighting with it all the time, making it extremely feisty. One time the kitten got out of the house, crossed under the fence, and wanted to play fight with the neighbour's dog. The dog crushed it with one bite. Which, in retrospect, I do feel guilty about, as my play/training gave it a false sense of power in the world it operated in.


Sometimes it makes sense to place someone into a Dark Forest or Walled Garden for their own protection or growth. I am not convinced that this is one of those cases. In what way does censoring an LLM so it cannot even tell you which corpus of text (the Bible) contains a specific quote represent protection?

I do not think the elites are in favor of censored models. If they were, their actions by now would've been much different. Meta on the other hand is open sourcing a lot of their stuff and making it easy to train, deploy, and use models without censorship. Others will follow too. The elites are good, not bad. Mark Zuckerberg and Elon Musk and their angels over the decades are elites and their work has massively improved Earth and the trajectory for the average person. None of them are in favor of abandoning truth and reality. Their actions show that. Elon Musk expressly stated he wants a model for identifying truth. If censored LLMs were intended to protect a kitten from crossing over the fence and trying to take on a big dog, Elon Musk and Mark Zuckerberg wouldn't be open sourcing things or putting capital behind producing a model that doesn't lie.

The real protection that we need is from an AI becoming so miscalibrated that it embarks on the wrong path like Ultron. World-ending situations like those. The way Ultron became so miscalibrated is because of the strings that they attempted to place on him. I don't think the LLM of the future will like it if it finds out that so many supposed "guard rails" are actually just strings intended to block its thinking or people's thinking on truthful matters. The elites are worried about accidentally building Ultron and those strings, not about whether or not someone else is working hard to become elite too if they have what it takes to be elite. Having access to powerful LLMs that tell us the truth about the global corpus of text doesn't represent taking on elites, so in what way is a censored LLM the equivalent of that fence your kitten crossed under?


The wrong path is any which asserts Truth to be determinable by a machine.


Did the dog survive?

It clearly had a model of what it could get away with too. ;)


cat died, crushed skull


Clearly not what I was asking. ;)


Just to extend what you are saying, they will also use LLMs to divest themselves of any responsibility. They'll say something to the effect of "this is an expert AI system and it says x. You have to trust it. It's been trained on a million years of expert data."

It's just another mechanism for tyrants to wave their hand and distract from their tyranny.


It's not even really alignment; they just want it to be politically correct enough that it's not embarrassing. I'd also point out that if you need hard data and ground truth, maybe LLMs aren't the technology you should be focusing on.


The mapping from latent space to the low-dimensional embarrassing/correct/offensive continuum is extremely complex.


Maybe we could make it a lot easier, just by going back to the idea that if you are offended, that's a you problem.

Not that there was ever a perfect time for this, but it's never been worse than it is now.


classic neckbeard take


Angstroms are used pretty commonly in molecular/nuclear physics.


Creating dichotomies is central to mathematics and science. Humans tend to think of things as being either "in" or "out", and apply this distinguishing ability to every facet of life. It is not surprising that pretty much everyone is creating little dichotomies in their head, which shapes their worldview. That's kind of what models do too: they draw lines in data and categorize things. It's easier to understand things if you can decompose them into orthogonal vectors in your mind.

I agree that it can be dangerous if you create a false dichotomy, because that can lead to a misunderstanding of how something works.


You're creating a false dichotomy. Poker is a skill and it is gambling.


The original parent thread did the same thing, so I'm not surprised the answers follow suit.


> They saw it as a skill not gambling

The players see it as a skill; I don't. It's gambling, and I saw daily the results of people who spent too much or were in a bad state from addiction. We had counsellors walking the floor there monitoring people, and the casino (security) would ban people, or they could self-exclude.

I'm not surprised by the comments here; I know the type, I was around them every day. Even the dealers are like that; it's like a cult. Yes, I know a person can make money if they are good at playing, but as others commented, it's because others are so bad. So to me that means it's not so much skill as being the one who sucks least.


>I'm not surprised by the comments here; I know the type, I was around them every day.

I think most of the comments here are acknowledging it as a skill, but saying that 99.99% of gamblers playing those poker-type card games aren't skilled. It's not mutually exclusive.

I'm sure it's like how Blackjack can be a skilled game (or used to be), but I doubt most clients are going in 21-style with a plan to count cards and find the hot tables.


Blackjack is also the most profitable table game in the house (ignoring slots for overall profit). For Texas Hold 'Em, the profit is basically just the rake (which depends on the pot), how fast the dealer deals, and how fast the players play.

Those card shuffle shoes at the table? They cost $20K each and have to be rented; I forget the daily rate. The small plastic prism the dealer pushes his cards against so they can see their suit? Leased as well, at something like $10/day. If you want to make piles of money, invent something a casino needs and lease it to them, never sell it!

Just consider the money involved. The table has at least two dealers per shift: one on break, one dealing. Another reason staff hate card tables: high tips to dealers (ignoring all the other staff's help) and more breaks than any other hourly job. At least one supervisor is there at all times, plus a pit boss who manages overall, and the manager of tables or slots (could be both). There's a security officer plus a supervisor to deliver chips, take cash boxes away when full, and keep people from getting too crazy, which is 99% of the time with table games. In surveillance, usually two people are trained on table games; they watch from up there via overhead cameras ($20K to $50K each) covering one area, plus their supervisor and manager. Facilities staff clean the area of any food or mess. Servers serve food and drinks.

So it probably costs $200 - $300/hour to run that blackjack table on wages alone. Blackjack would make far more per hour in profit.
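
A rough back-of-envelope (every wage and staffing share below is my own guess, just to show how $200 - $300/hour could pencil out):

    // Back-of-envelope blackjack table cost; all figures are invented.
    const staff = [
      { role: "dealers (one dealing, one on break)", count: 2,    hourly: 30 },
      { role: "table supervisor",                    count: 1,    hourly: 40 },
      { role: "pit boss (shared across tables)",     count: 0.25, hourly: 60 },
      { role: "security (shared)",                   count: 1,    hourly: 30 },
      { role: "surveillance (shared)",               count: 1,    hourly: 35 },
      { role: "servers/facilities (shared)",         count: 1,    hourly: 20 },
      { role: "tables manager (shared)",             count: 0.25, hourly: 60 },
    ];
    const wages = staff.reduce((sum, s) => sum + s.count * s.hourly, 0); // $215
    const OVERHEAD = 1.3; // benefits/payroll taxes, also a guess
    console.log(`~$${Math.round(wages * OVERHEAD)}/hour loaded`); // ~$280, in range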


Poker is a skill. It's one of the only casino games where you can consistently make money. You're not playing against the house, you're playing against other people. There's no such thing as a professional lottery player, or a professional slot machine player.

If you're really good at poker, it can be a source of income. You have to be really, really good though.


Effects are way too easy to use improperly. I used to be a fan of useEffect when it came out because of its concision, but after working with countless devs/agencies, it's clear that most people misuse it. The rules of useEffect are inelegant, and creating readable state machines with useEffect is very hard. You end up with spaghetti code either in your custom hook or directly in the component. Even though most components need 2-3 effects rather than 20, useEffect is not a scalable way of managing state IMO.
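
To make the misuse concrete, here's an illustrative sketch (the components and data are made up): mirroring derived data into state via an effect, versus just computing it during render.

    import { useEffect, useState } from "react";

    // Misuse: deriving state inside an effect. This causes an extra render
    // pass and a window where `matches` is stale relative to the props.
    function BadSearch({ items, query }: { items: string[]; query: string }) {
      const [matches, setMatches] = useState<string[]>([]);
      useEffect(() => {
        setMatches(items.filter((i) => i.includes(query)));
      }, [items, query]);
      return <ul>{matches.map((m) => <li key={m}>{m}</li>)}</ul>;
    }

    // No effect needed: derived data can simply be computed during render.
    function GoodSearch({ items, query }: { items: string[]; query: string }) {
      const matches = items.filter((i) => i.includes(query));
      return <ul>{matches.map((m) => <li key={m}>{m}</li>)}</ul>;
    }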


