
It's not necessarily an either-or. Your local LLM could offload hard problems to a service by encoding your request, together with context and relevant information about you, into a vector, sending that off for analysis, then decoding the returned vector locally to act on it. It'd be like asking a friend when one is available.
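A minimal sketch of that flow, assuming a hypothetical local encoder/decoder pair and a remote analysis endpoint (the function names, the 768-dim embedding size, and the URL are all illustrative, not a real API):

```python
import numpy as np
import requests

def encode_locally(request_text: str, user_context: dict) -> np.ndarray:
    # Hypothetical: a local model would map the request plus context
    # into a fixed-size embedding. Here a deterministic stand-in vector.
    seed = hash((request_text, tuple(sorted(user_context.items())))) % 2**32
    rng = np.random.default_rng(seed)
    return rng.standard_normal(768).astype(np.float32)

def offload(vector: np.ndarray, service_url: str) -> np.ndarray:
    # Only the opaque vector crosses the wire, not the raw request.
    resp = requests.post(service_url, json={"vector": vector.tolist()})
    resp.raise_for_status()
    return np.asarray(resp.json()["vector"], dtype=np.float32)

def decode_locally(vector: np.ndarray) -> str:
    # Hypothetical: the local model turns the returned vector back
    # into an actionable answer on-device.
    return f"decoded result from {vector.shape[0]}-dim vector"

# Usage sketch: encode on-device, offload the hard part, decode on-device.
ctx = {"locale": "en-US", "device": "laptop"}
v = encode_locally("plan a three-city itinerary", ctx)
answer = decode_locally(offload(v, "https://example.com/analyze"))
```

The appeal of this split is that the remote service never sees the plaintext request, only an embedding it can compute over, while all interpretation stays local.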


