
That would make a lot more sense. That way at least you have a chance to check up on the output, lest your first meal of Hedysarum alpinum end up being your last.
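
The checking step could be as simple as making the model's answer point you at the article to read. A rough sketch of that idea below; the articles.txt file, its tab-separated format, and the example claim are all made up for illustration, not from the thread:

    # Minimal sketch: given a claim from the model, find the local
    # Wikipedia article most worth reading before you trust it.
    import re
    from collections import Counter

    def tokenize(text):
        return re.findall(r"[a-z]+", text.lower())

    def score(query_tokens, doc_tokens):
        # Crude overlap score: total occurrences of query words in the doc.
        counts = Counter(doc_tokens)
        return sum(counts[t] for t in set(query_tokens))

    # articles.txt: one "title<TAB>body" per line, extracted from a dump.
    docs = []
    with open("articles.txt", encoding="utf-8") as f:
        for line in f:
            title, _, body = line.partition("\t")
            docs.append((title, body))

    claim = "Hedysarum alpinum seeds are safe to eat"
    q = tokenize(claim)
    best = max(docs, key=lambda d: score(q, tokenize(d[1])))
    print(best[0])  # the article to read before trusting the model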



The key to this being a good solution is actually building a corpus that covers as much as possible of the reference knowledge the scenario would call for. The idea that Wikipedia alone is the answer way underestimates the scope.

The Wikipedia patch doesn’t make much sense to me.

What percent of the important questions being asked in this doomsday scenario actually have their answer in Wikipedia?

If 50% of the time you are left trusting raw LLaMA output, then you don't really have a decent solution.

I do appreciate the sentiment, though, that future or fine-tuned LLMs might fit on an RPi or whatever and be good enough.
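
For what it's worth, small quantized models already run fully offline via llama-cpp-python. A hedged sketch, where the GGUF model file name is a placeholder for whichever small model you can actually fit:

    # Sketch: offline question answering with a small quantized model.
    # Requires: pip install llama-cpp-python, plus a GGUF model file.
    from llama_cpp import Llama

    llm = Llama(model_path="./small-model.gguf", n_ctx=2048)

    out = llm(
        "Q: Are the seeds of Hedysarum alpinum edible?\nA:",
        max_tokens=128,
        stop=["Q:"],  # stop before the model invents a follow-up question
    )
    print(out["choices"][0]["text"])

Whether the answer is trustworthy is exactly the open question here, which is why pairing it with a checkable local corpus matters.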


I think the theory is that an LLM can integrate the knowledge of Wikipedia and become something greater than the sum of its parts, applying reasoning that is explained well in one article to situations in other topics where it isn't spelled out. Then you can ask naive questions in new scenarios where you lack the background knowledge (or simply the mental energy) to work out the right answer yourself, and it powers through for you.

AFAIK current LLMs are not this abstract. If one type of reasoning is usually applied in one topic and a different type in another, the model has no context beyond the words themselves and which topics humans tend to jump to from a given topic.



