Hacker News

While I find value in LLMs, overall they still don't seem as useful as they reasonably should be.

It might be like trying to train a neural net in 1993 on a 60 MHz Pentium: the right idea, but fundamental parts of the system are still lacking.

On the other hand, I worry we have gone down the support vector machine path again: a huge amount of brainpower spent on a somewhat dead end that just fits the current hardware better than what we will actually use in the long run.

The big difference from SVMs, though, is that this has captured the popular imagination, and if the tide goes out, the AI winter will be the most brutal winter by an order of magnitude.

AGI or bust.




I’d say the biggest difference between LLMs and SVMs is that a lot of people find LLMs useful on a daily basis.

I’ve been using them almost daily for over two years now, and I keep on finding new things they can do that are useful to me.


They’re useful, but not for what AI companies seem to be pushing for.

I like that they can reorganize my data, document QA is pretty killer as long as the document was prepared well.

Embeddings are sick.

But content creation… not useful. Problem solving? Personally, I have not found them useful (haven't tried o1 yet).
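The "embeddings" and "document QA" uses above boil down to the same retrieval idea: embed each document and the question as vectors, then rank documents by cosine similarity. A minimal sketch of that ranking step, using toy hand-written vectors in place of real model embeddings (the document names and dimensions here are illustrative assumptions, not from any actual system):

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" standing in for real model output.
docs = {
    "invoice": [0.9, 0.1, 0.0],
    "recipe":  [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # embedding of the user's question

# Pick the document whose embedding is closest to the query.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # → invoice
```

In a real pipeline the vectors would come from an embedding model's API and the top-ranked chunks would be pasted into the LLM prompt, which is why the "as long as the document was prepared well" caveat matters: chunking and cleanup happen before any of this runs.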


Is there a post on your blog that lists your different uses of LLMs?


Not in a single place, but it came up in a podcast episode the other day, about 32 minutes into this one I think: https://softwaremisadventures.com/p/simon-willison-llm-weird...




