Hacker News

It's scary to think that we are moving in this direction: I can see how, in the next few years, politicians and judges will use LLMs as neutral experts.

And all in the hands of a few big tech corporations...




They aren't just in the hands of big corporations though.

The open source, local LLM community is absolutely buzzing right now.

Yes, the big companies are making the models, but enough of them are open weights that they can be fine tuned and run however you like.

I think LLMs genuinely do present an opportunity to be neutral experts, or at the least neutral third parties. If they're run in completely transparent ways, they may be preferable to humans in some circumstances.


The whole problem is that they are not neutral. They token-complete based on the corpus that was fed into them and the dimensions that were extracted out of those corpuses and the curve-fitting done to those dimensions. Being "completely transparent" means exposing _all_ of that, but that's too large for anyone to reasonably understand without becoming an expert in that particular model.

And then we're right back to "trusting expert human beings" again.


Nothing is truly neutral. Humans all have a different corpus too. We roughly know what data has gone in, and what the RL process looks like, and how the models handle a given ethical situation.

With good prompting, the SOTA models already act in ways I think most reasonable people would agree with, and that's without trying to build this specifically for that use case.


> Yes, the big companies are making the models, but enough of them are open weights that they can be fine tuned and run however you like.

And how long is that going to last? This is a well-known playbook at this point; we'd be better off not falling for it yet again. Sooner or later they'll lock the ecosystem down, take all the free stuff away, and demand to extract the market value out of the work they used to "graciously" provide for free to build an audience and market share.


How will they do this?

You can't take the free stuff away. It's on my hard drive.

They can stop releasing them, but local models aren't going anywhere.


They can't take the current open models away, but those will eventually (and I imagine, rather quickly) become obsolete for many areas of knowledge work that require relatively up to date information.


What are the hardware and software requirements for a self-hosted LLM that is akin to Claude?


Llama 3.3 70B, after quantization, runs reasonably well on a 24GB GPU (7900 XTX or 4090) plus 64GB of regular RAM. Software: https://github.com/ggerganov/llama.cpp
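A rough back-of-the-envelope sketch of why that GPU + system RAM split works (the ~4.5 bits/weight figure for a Q4_K_M-style quantization and the 70B parameter count are approximations, not exact llama.cpp numbers):

```python
def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate in-memory size of the quantized weights, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# ~4.5 bits/weight is a rough average for a Q4_K_M-style quantization.
weights_gb = model_size_gb(70, 4.5)   # roughly 39 GB of weights alone
vram_gb = 24                          # 7900 XTX / 4090

# llama.cpp can keep only part of the layers on the GPU and leave the
# rest in system RAM, which is why 64GB of regular RAM is part of the
# recommendation: the weights alone exceed 24GB of VRAM.
offloaded_gb = max(0.0, weights_gb - vram_gb)
print(f"weights ~= {weights_gb:.1f} GB, spilled to RAM ~= {offloaded_gb:.1f} GB")
```

This ignores KV cache and activation memory, so treat it as a lower bound when sizing hardware.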


The world was such a boring and dark place before everybody was constantly swiping on their smartphone in every situation, and before basically everything anyone said got piped through a big-tech data center, where their algorithms control where it goes.

Now we finally have a tool where all of you can prove every day how strong/smart/funny/foo you are (not actually). How was life even possible without it?

So, don't be so pessimistic. ;)


> I can see how in the next few years politicians and judges will use LLMs as neutral experts.

While also noting that "neutral" is not well-defined, I agree. They will be used as if they were.


Will they though?

We humans are very good at rejecting any information that doesn’t confirm our priors or support our political goals.

Like, if ChatGPT says (say) vaccines are good/bad, I expect the other side will simply attack and reject it as misinformation, conspiracy, and similar.


From what I can see, LLMs default to being sycophants; acting as if a sycophant were neutral is entirely compatible with the cognitive bias you describe.





