The habeas petition for VMS (the two-year-old) indicates that the father (who was not detained at the time of the filing) transferred provisional custody rights to a US citizen relative, and that communications with the mother (who was removed along with their US citizen child) were cut off when he tried to share their lawyer's contact info.
It gives me better answers on most things than my actual PhD friends do. So... yeah?
The funny thing is, it's somewhat less useful for certain business stuff, because so much of that is private within corporations. But the stuff you learn in a PhD program is all published. And it's pretty phenomenal at distilling that information to answer any specific question.
Multiple times a week I run into an obscure term or concept in a paper that isn't defined well or doesn't seem to make sense. I ask AI and it explains it to me in a minute. Yes, it's basically exactly like asking a PhD.
The AI is optimized for producing text that sounds like it makes sense and is helpful.
This is not a guarantee that the text it produces is a correct explanation of the thing you are asking about. It’s a mental trick like a psychic reading tea leaves.
And they do. They stand on the fact that they save time, raise productivity, and assist in learning. That's the merit.
Demanding absolute perfection as the only measure of merit is bonkers. And if that's the standard you hold everything in your life to, you must be pretty disappointed with the world...
None of my comments say I’m demanding perfection. That’s a fallacy that reduces my position to an absurdity so it can be easily dismissed.
LLMs have not improved my productivity. When I have tried to use them, they have been a net negative. There are many other people who report a similar experience.
> This is not a guarantee that the text it produces is a correct explanation
A guarantee of correctness is perfection. I don't know how else to take it.
Not all jobs or tasks are helped by LLMs. That's fine. But many are, and hugely.
You dismissed it for everyone as "a mental trick like a psychic reading tea leaves". Implying it has no value for anyone.
Your words.
That's just wrong.
Now you say it doesn't have value for you and for some other people. That's fine. But that's not what you were saying above. That's not what I was responding to.
"But the stuff you learn in a PhD program is all published." - What? This is the kind of misunderstanding of knowledge that AI boosters present that drives me insane.
And the last sentences conflate a PhD with a Google search or even a dictionary lookup. I mean, c'mon!
I'm not talking about learning practical skills like research and teaching, or laboratory skills. I'm talking about the factual knowledge. Academia is built on open publishing. Do you disagree?
And the things I'm looking up just can't be found in Google or a dictionary. It's something defined in some random paper from 1987, further developed by someone else in 1998, that the author didn't cite.
And something that led you to that paper would be wonderful, but instead you have been disconnected from the social side of scholarship and forced to take the AI "at its word".
I've also seen AI just completely make up nonsense out of nowhere as recently as last week.
Huh? Nobody's forcing me to "take the AI at its word". It's the easiest thing to verify.
And I've got enough of the social side of scholarship already. Professors don't need me emailing them with questions, and I don't need to wait days for replies that may or may not come.
You literally ask it for the paper(s) and author(s) associated, put them into Google Scholar, and go read them. If it hallucinates a paper title, Scholar will usually find the relevant work(s) anyways because the author and title are close enough. If those fail, you Google some of the terms in the explanation, which is generally much more successful than Googling the original query. If you can't find anything at all, then it was probably a total hallucination, and you try the prompt a different way. That probably happens less than 1% of the time, however.
I mean, it's all just kind of common sense how to use an LLM.
Well, I must admit that is true, but I guess that 20% of the annual budget going toward interest feels like an impossibly large fraction to overcome. But yes, theoretically, if GDP grew by 300% in the next year, the debt would shrink proportionately relative to GDP, and I would feel much better about not needing to make any cuts. I suppose my concern is that with the nature of the business cycle, we will run into a recession sooner or later, and when that happens, if GDP and tax revenues both go down for a sustained period, then I would worry that lenders would become hesitant to provide additional funding. But I suppose that would be a complicated situation with many other factors, so maybe I am worrying too much.
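To make the "debt shrinks proportionately" point concrete, here is a minimal arithmetic sketch. The dollar figures are made up for illustration; only the ratio logic matters: 300% growth means GDP quadruples, so a fixed nominal debt falls to a quarter of its former share of GDP.

```python
# Hypothetical figures (trillions) -- NOT real budget data.
debt = 30.0
gdp = 25.0

ratio_before = debt / gdp          # debt-to-GDP before growth: 1.2

gdp_after = gdp * (1 + 3.00)       # 300% growth => GDP quadruples
ratio_after = debt / gdp_after     # same nominal debt, new GDP: 0.3

print(ratio_before, ratio_after)   # the ratio falls by a factor of 4
```

The same logic is why modest sustained growth (or inflation) erodes a debt burden even when nothing is repaid, and why a recession that shrinks GDP raises the ratio with no new borrowing.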
The government's debt is not the same type of thing as household debt. Can you elaborate on how you think they are the same? Do you believe there are not other factors besides just credits being less than debits?
The story[0] I'm referring to is about the Technology Transformation Services, which I think is also apt. Also, I would argue that the actions of government are more political than technological or, actually, that making such a distinction is naive.
I'm going to assume (foolishly) that this is an honest question. The answer would be: The DOGE situation as described in the article this comment chain is about.