
* I was hoping for this already this year, but: a split of LLMs into databases and reasoning. Think a small, capable Phi-style model with pluggable information. Currently we're burning a lot of energy to both learn and process everything at once. The next step will be something closer to RAG, potentially selecting databases to load depending on the topic, like a librarian. This will both enable more client-side applications and save providers a lot of money.

* Some AI provider seriously looking at / funding RWKV?

* (More) healthcare issues in the US, spilling into other countries (or just the beginning of them, anyway; the effects will last much longer, and life expectancy will decline).

* Some companies seriously looking at AI as another manager / decision maker. Quietly, not as a publicity stunt like it's done now.

* Google search market share falling further. Maybe 85%, down from the current 90%. (more of a wish than a prediction)

* US policies / ideas around cryptocurrency will be wildly incoherent, causing big swings every month

* Consumer RISC-V laptops (again, wishlist)
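
The "librarian" idea in the first bullet could look, in toy form, like this: route each query to one topic-specific knowledge base and only load/search that one, instead of baking everything into the weights. Everything here (`TOPIC_DBS`, the keyword-overlap router) is a hypothetical stand-in for real embeddings and vector indexes, just to make the routing step concrete:

```python
# Toy sketch of topic-routed retrieval ("librarian" RAG).
# All names and data here are hypothetical placeholders.

# Hypothetical per-topic document stores; in practice each would be
# a separate vector index, loaded on demand only when its topic is picked.
TOPIC_DBS = {
    "medicine": ["aspirin reduces fever", "insulin regulates blood sugar"],
    "hardware": ["RISC-V is an open ISA", "laptops need efficient SoCs"],
}

def pick_topic(query: str) -> str:
    """Crude keyword-overlap routing, standing in for a learned classifier."""
    words = set(query.lower().split())
    scores = {
        topic: sum(len(words & set(doc.lower().split())) for doc in docs)
        for topic, docs in TOPIC_DBS.items()
    }
    return max(scores, key=scores.get)

def retrieve(query: str) -> list:
    """Search only the chosen database, leaving the others unloaded."""
    topic = pick_topic(query)
    words = set(query.lower().split())
    return [doc for doc in TOPIC_DBS[topic]
            if words & set(doc.lower().split())]
```

The point of the design is that the expensive part (the knowledge) sits outside the model and is selected per query, so a small reasoning model plus swappable databases can replace one giant monolith.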




> Consumer RISC-V laptops (again, wishlist)

Depending on how one defines "consumer," that's already available: https://liliputing.com/risc-v-laptops-299-muse-book-and-399-... and https://frame.work/products/deep-computing-risc-v-mainboard was announced earlier, but it seems to be, as you implied, still "mailing-list ware."

But, out of curiosity: if you bought one of those, what would you run on it? I see that StarFive claims they have Chromium[1], and there are reports of Firefox[2], so maybe in this "the browser is the OS" world that'd be sufficient.

1: https://github.com/starfive-tech/chromium.src/wiki/Chromium-...

2: https://lists.riscv.org/g/apps-tools-software/topic/now_we_h...


I'd just do all my normal development on it. Nice, power-efficient machines that push apps toward multi-architecture support and aren't held back by Qualcomm hardware support would be amazing.


>I was hoping for that this year, but: split of LLMs into databases and reasoning

This hasn't worked out because knowing what to query, and how to query it, requires intelligence, and that intelligence is contained in the weights.


Splitting the knowledge out doesn't mean you need to drop weights entirely, just that the knowledge needs to become independent / attachable.

To some extent we've already seen this with MoE and Frankenstein models.
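
As a toy illustration of that attachability, here is a MoE-style layer where each expert is an independently pluggable module and a gate decides which one handles an input. The names and structure are illustrative, not any real framework's API, and the gate here is a plain dot product rather than a trained network:

```python
# Toy mixture-of-experts layer: knowledge lives in separate "expert"
# modules that can be attached without touching the shared gating logic.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class MoELayer:
    def __init__(self, gate_weights, experts):
        self.gate_weights = gate_weights  # one score vector per expert
        self.experts = experts            # pluggable expert functions

    def attach_expert(self, gate_vec, expert_fn):
        """Add a new knowledge 'module' without retraining the rest."""
        self.gate_weights.append(gate_vec)
        self.experts.append(expert_fn)

    def forward(self, x, top_k=1):
        # Gate: dot-product scores decide which expert(s) handle x.
        scores = [sum(w_i * x_i for w_i, x_i in zip(w, x))
                  for w in self.gate_weights]
        probs = softmax(scores)
        top = sorted(range(len(probs)), key=probs.__getitem__)[-top_k:]
        # Weighted sum of only the selected experts' outputs.
        out = [0.0] * len(x)
        for i in top:
            y = self.experts[i](x)
            out = [o + probs[i] * y_j for o, y_j in zip(out, y)]
        return out
```

Only the selected experts run per token, which is exactly the energy-saving property the parent comment is after; the open question is whether experts trained separately can be bolted on without the gate having been trained alongside them.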





