
I wouldn't be surprised if GPT genuinely makes better decisions than an inexperienced, first-time CEO who has only been a dev before, especially if the person prompting it has actually put some effort into understanding their own weaknesses. It certainly wouldn't be any worse than someone whose only experience is reading a few management books.



And here is a great example of the problem.

An LLM doesn’t make decisions. It generates text that plausibly looks like it made a decision, when prompted with the right text.


Why is this distinction lost in every thread on this topic? I don't get it.


Because it’s a distinction without a difference. You can say the same thing about people: many/most of our decisions are made before our consciousness is involved. Much of our “decision making” is just post hoc rationalization.

What the “LLMs don’t reason like we humans do” crowd is missing is that we humans actually don’t reason as much as we would like to believe[0].

It’s not that LLMs are perfect or rational or flawless… it’s that their gaps in these areas aren’t atypical for humans. Saying “but they don’t truly understand things like we do” betrays a lack of understanding of humans, not LLMs.

0. https://home.csulb.edu/~cwallis/382/readings/482/nisbett%20s...


A lot more people are credulous idiots than anyone wants to believe, and the confusion/misunderstanding is being actively propagated.


Seeing dissenting opinions as being “actively propagated” by “credulous idiots” sure makes it easy to remain steady in one’s beliefs, I suppose. Not a lot of room to learn, but no discomfort from uncertainty.


I think we have to be open to the possibility that it's us, not them, but I haven't been convinced yet.


I think they just mean that GPT produces text that a human then uses to make a decision (rather than "GPT making a decision").


I wish that were true.


Yeah, that's fair. I should have said something like "GPT generates a less biased description of a decision than an inexperienced manager would", and added that using that description as the basis of an actual decision likely leads to better outcomes.

I don't think there's much of a difference in practice, though.


Think of all the human growth and satisfaction being lost to risk mitigation by offloading the pleasure of failure to machines.


Ah, but machines can’t fail! So don’t worry, humans will still get to experience the ‘pleasure’. But they won’t be able to learn or change anything.



