The Vercel AI SDK abstracts over all the major LLMs, including locally running ones. It even handles file attachments well, which is something people are using more and more.
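If it helps, here's a rough sketch of what that looks like. The endpoint, model name, and the exact attachment shape are assumptions on my part, and they vary by SDK version and local server:

```typescript
// Sketch only: URL, model ID, and file-part shape are illustrative.
import { readFileSync } from "node:fs";
import { generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

// Any OpenAI-compatible local server works; only the baseURL changes.
const local = createOpenAI({
  baseURL: "http://localhost:11434/v1", // e.g. Ollama's OpenAI-compatible endpoint
  apiKey: "unused", // local servers typically ignore the key
});

const { text } = await generateText({
  model: local("llama3.1"),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Summarize this document." },
        // File parts are how the SDK represents attachments; the exact
        // field names differ between SDK versions.
        { type: "file", data: readFileSync("report.pdf"), mimeType: "application/pdf" },
      ],
    },
  ],
});
console.log(text);
```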
I found that [LangChain](https://www.langchain.com/langchain) has pretty good abstractions and better support for multiple LLMs. They also have a good ecosystem of supporting products: LangGraph and LangSmith. Currently supported languages are Python and JavaScript.
My problem with LangChain (aside from dubious API choices, some a legacy of when it first started) is that now it's a marketing tool for LangGraph Platform and LangSmith.
Their docs (including the getting-started tutorials) are content marketing for the platform services, needlessly pushing new users in a more complex and possibly unnecessary direction in the name of user acquisition.
(I have the same beef with Next.js/Vercel and MongoDB.)
Some time ago I built a rather thin wrapper for LLMs (multi-provider including local, templates, tools, RAG, etc.) for myself. Sensible API, small so it's easy to maintain, and as a bonus, no platform marketing shoved in my face.
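Not the actual code, but the shape of the idea is something like this (all names here are made up for illustration):

```typescript
// Illustrative only: the kind of thin, provider-agnostic surface such a
// wrapper might expose.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface LLMProvider {
  // Each provider (OpenAI, Anthropic, a local llama.cpp server, ...)
  // implements this one method however it needs to.
  complete(messages: ChatMessage[], opts?: { temperature?: number }): Promise<string>;
}

// "Templates" can be plain functions; no framework required.
const summarize = (doc: string): ChatMessage[] => [
  { role: "system", content: "You are a concise summarizer." },
  { role: "user", content: `Summarize:\n\n${doc}` },
];

// Application code only ever sees the interface, so providers stay swappable.
async function run(provider: LLMProvider, doc: string): Promise<string> {
  return provider.complete(summarize(doc), { temperature: 0.2 });
}
```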
I keep an eye on what the LangChain ecosystem's doing, and so far the benefits ain't really there (for me, YMMV).
I agree that the LangChain docs and examples shouldn’t rely on their commercial platform products. The LangGraph and LangSmith documentation should then layer on top of the LangChain docs.
I would have thought this was impossible; I contribute to llama.cpp, and there's an awful lot of per-model ugliness to make things work, even just in terms of "getting the tool calls to it in the form it expects."
*cries at the Phi-4 PR in the other window that I'm still working on, and discovering new things, 4 weeks later*
I’d guess it supports a small set of popular HTTP APIs. In particular, it's very common for self-hosting LLM toolkits (not the per-model reference implementations or low-level console frontends) to present an OpenAI-compatible API, so you could support a very wide range of local models, across a variety of toolkits, just by speaking the OpenAI API with configurable endpoint addresses.
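Concretely, the "configurable endpoint" approach is just this; the URL and model name below are examples, not defaults you can rely on:

```typescript
// Assumed setup: some local server (llama.cpp's llama-server, Ollama, vLLM, ...)
// exposing the OpenAI chat-completions protocol.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:8080/v1", // whatever your local server listens on
  apiKey: "none", // usually ignored by local servers
});

const res = await client.chat.completions.create({
  model: "local-model", // many local servers ignore or loosely match this
  messages: [{ role: "user", content: "Hello from a local model" }],
});
console.log(res.choices[0].message.content);
```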
At some point someone tried a toy direct integration, but the only actually supported way is via a Python library that wraps llama.cpp in an OpenAI-compatible API endpoint.
Just in case it's helpful to anyone, I recently spoke to a very respected consultant about this, and he also recommended Vercel's AI SDK. We haven't tried it yet but plan to.
> How is this allowed/possible in npm? Don't they have mandatory namespaces?
No, scopes (i.e., namespaces) aren’t mandatory for public packages on npm; unscoped names like `lodash` are first-come, first-served, while scoped names like `@babel/core` live under an org or user. If a name is available, it’s yours. Sometimes a name is taken by a package that's empty or abandoned, in which case you can reach out to the owner and ask them to pass it on. At least a few times, I’ve had someone reach out to me about a package I had published years ago and done nothing with, and I passed it on to them.
Also, sometimes you can go directly to npm and ask for an existing name; they may give it to you, and the previous owner may get pissed and unpublish their other packages that are dependencies of basically everything, breaking the build scripts of the whole internet (see left-pad in 2016).
https://sdk.vercel.ai/docs/introduction
It uses zod for types and validation; I've loved using it to make my apps swap between models easily.
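Roughly what that looks like in practice (the model IDs here are illustrative):

```typescript
// Sketch: generateObject validates the model's output against a zod schema,
// and swapping providers is a one-line change.
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const Recipe = z.object({
  name: z.string(),
  ingredients: z.array(z.string()),
});

// The only provider-specific line in the app:
const model = process.env.USE_CLAUDE
  ? anthropic("claude-3-5-sonnet-latest")
  : openai("gpt-4o-mini");

const { object } = await generateObject({
  model,
  schema: Recipe,
  prompt: "Give me a simple pancake recipe.",
});
console.log(object.ingredients); // typed as string[] thanks to the schema
```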