
You got a source for that? I've heard otherwise about some of the parents' decisions for their US citizen children.

ICE is supposed to keep records, and the courts are supposed to create a transparent record in the case of a dispute.

But ICE hid the evidence and prevented the courts from looking into it.


You got a source for that?

The habeas petition for VMS (the two-year-old) indicates that the father (who was not detained at the time of the filing) transferred provisional custody rights to a US citizen relative, and that communications with the mother (who was removed along with their US citizen child) were cut off when he tried to share their lawyer's contact info.

PDF: https://storage.courtlistener.com/recap/gov.uscourts.lawd.21...


The humans, but now with a boss that values them a little more.


I am afraid the assumptions leading to that outcome are not really realistic.


But it's not "direct access to every PhD in the world." You don't really believe that do you?


It’s like the people who say these things live on a different planet.

Or an alternate timeline where a different version of LLMs were invented.


It gives me better answers on most things than my actual PhD friends do. So... yeah?

The funny thing is, it's somewhat less useful for certain business stuff, because so much of that is private within corporations. But the stuff you learn in a PhD program is all published. And it's pretty phenomenal at distilling that information to answer any specific question.

Multiple times a week I run into an obscure term or concept in a paper that isn't defined well or doesn't seem to make sense. I ask AI and it explains it to me in a minute. Yes, it's basically exactly like asking a PhD.


The AI is optimized for producing text that sounds like it makes sense and is helpful.

This is not a guarantee that the text it produces is a correct explanation of the thing you are asking about. It’s a mental trick like a psychic reading tea leaves.


I'm so tired of this caveat.

It's generally pretty obvious if the explanation makes sense. And you can locate the original paper(s) as well to verify.

And you know what? My PhD friends get things wrong all the time too. I need to verify what they say as well.

"This is not a guarantee"? You're right. Nothing is a guarantee, but a lot of things are awfully helpful.


I’m tired of “well, people get things wrong too” as a defense of these systems. They should stand or fall on their own merit.

And yes, keep reducing everything in the world until nothing matters. Everything is relative. What even is truth, amirite?

If that’s our slogan for the future then it is hella depressing.


> They should stand or fall on their own merit.

And they do. They stand on the fact that they save time, raise productivity, and assist in learning. That's the merit.

Demanding absolute perfection as the only measure of merit is bonkers. And if that's the standard you hold everything in your life to, you must be pretty disappointed with the world...


None of my comments say I’m demanding perfection. That’s a fallacy that reduces my position to an absurdity so it can be easily dismissed.

LLMs have not improved my productivity. When I have tried to use them, they have been a net negative. There are many other people who report a similar experience.


You said:

> This is not a guarantee that the text it produces is a correct explanation

A guarantee of correctness is perfection. I don't know how else to take it.

Not all jobs or tasks are helped by LLMs. That's fine. But many are, and hugely.

You dismissed it for everyone as "a mental trick like a psychic reading tea leaves". Implying it has no value for anyone.

Your words.

That's just wrong.

Now you say it doesn't have value for you and for some other people. That's fine. But that's not what you were saying above. That's not what I was responding to.


"But the stuff you learn in a PhD program is all published." - What? This is the kind of misunderstanding of knowledge that AI boosters present that drives me insane.

And the last sentence conflates a PhD with a Google search or even a dictionary lookup. I mean, c'mon!


I'm not talking about learning practical skills like research and teaching, or laboratory skills. I'm talking about the factual knowledge. Academia is built on open publishing. Do you disagree?

And the things I'm looking up just can't be found in Google or a dictionary. It's something defined in some random paper from 1987, further developed by someone else in 1998, that the author didn't cite.


And something that led you to that paper would be wonderful, but instead you have been disconnected from the social side of scholarship and forced to take the AI "at its word".

I've also seen AI just completely make up nonsense out of nowhere as recently as last week.


Huh? Nobody's forcing me to "take the AI at its word". It's the easiest thing to verify.

And I've got enough of the social side of scholarship already. Professors don't need me emailing them with questions, and I don't need to wait days for replies that may or may not come.


How do you verify it or connect it with other papers if it gives you a summary instead of linking to the paper itself?


You literally ask it for the paper(s) and author(s) associated, put them into Google Scholar, and go read them. If it hallucinates a paper title, Scholar will usually find the relevant work(s) anyways because the author and title are close enough. If those fail, you Google some of the terms in the explanation, which is generally much more successful than Googling the original query. If you can't find anything at all, then it was probably a total hallucination, and you try the prompt a different way. That probably happens less than 1% of the time, however.

I mean, it's all just kind of common sense how to use an LLM.
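
For what it's worth, here's a minimal Python sketch of that lookup step (the citation string and fallback terms are made-up examples, and it only builds the Scholar search URLs rather than scraping any results):

    import webbrowser
    from urllib.parse import quote_plus

    def scholar_url(query: str) -> str:
        # Google Scholar free-text search URL for a query string
        return "https://scholar.google.com/scholar?q=" + quote_plus(query)

    # Title/author as reported by the LLM (hypothetical example)
    citation = "Smith 1987 provisional stability criterion"
    webbrowser.open(scholar_url(citation))

    # Fallback: if nothing relevant turns up, search the key terms
    # from the model's explanation instead of the original query
    fallback_terms = "provisional stability criterion dynamical systems"
    webbrowser.open(scholar_url(fallback_terms))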


Fair enough, if you're using it as a better way to find relevant papers I have no complaints.

I've mainly seen it used for getting answers without needing to or even being able to access the original source.


They do, is the thing.


People saying things like "It literally knows everything..." unironically is half the reason some of us are bored of it.


> eventually cuts must be made.

This is logically (and in a simple way) false. Incomes could also increase.


Well, I must admit that is true, but I guess that 20% of the annual budget going toward interest feels like an impossibly large fraction to overcome. But yes, theoretically, if GDP grew by 300% in the next year, the debt would shrink proportionately relative to GDP, and I would feel much better about not needing to make any cuts.

I suppose my concern is that, given the nature of the business cycle, we will run into a recession sooner or later, and when that happens, if GDP and tax revenues both go down for a sustained period, I would worry that lenders would become hesitant to provide additional funding. But I suppose that would be a complicated situation with many other factors, so maybe I am worrying too much.


Lenders? You mean bond purchasers? That's who lends money to the Gov.


The government's debt is not the same type of thing as household debt. Can you elaborate on how you think they are the same? Do you believe there are no other factors besides just credits being less than debits?


I get what you're trying to say, but this kind of phrasing does more damage by instilling a (mostly) false sense of learned helplessness.


No, it instills a reality about so-called "rights". Liberal society is full of lies like this and I'm just pointing them out.


And if the year were 1850, you'd be writing about the 'reality' of needing to accept slavery.

You're not dropping any truth bombs, you're just rationalising cowardice in the face of injustice.


How does this political story stay up but not the ones about DOGE? What gives?


There was a huge story with 1600 points three days ago:

https://news.ycombinator.com/item?id=42981756

We don't need new stories daily.

And stories about encryption back doors are as much technological as political.


The story[0] I'm referring to is about the Technology Transformation Services, which I think is also apt. Also, I would argue that the actions of government are more political than technological or, actually, that making such a distinction is naive.

[0] https://news.ycombinator.com/item?id=43037426


There are at least 60 recent DOGE stories on HN with open comment threads. I guess people get a bit DOGEd out.

It's probably part of the Trump/Musk strategy: 'flood the zone' with so many things that people can't follow it all.

(on zone flooding https://youtu.be/iTSgL_R1CC4)


It happened a lot more with DOGE


This says more about what you know than the importance of said agency.


I'm going to assume (foolishly) that this is an honest question. The answer would be: The DOGE situation as described in the article this comment chain is about.

