
I’d prefer it be an OS API.

You link your OS to a local or cloud LLM, and a local program asks the OS for a response and can't even tell which model you're using or whether it's on the machine or not. It should all be abstracted away.
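A minimal sketch of what that abstraction might look like. This is purely hypothetical: no OS ships such an API today, and all class and method names here (`LLMBackend`, `OSLLMService`, `complete`) are invented for illustration.

```python
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Hypothetical backend interface the OS would implement."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class LocalBackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        # Would run an on-device model; placeholder response here.
        return f"(answered locally) {prompt}"

class CloudBackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        # Would call out to a hosted model; placeholder response here.
        return f"(answered in the cloud) {prompt}"

class OSLLMService:
    """Stand-in for the OS API: applications only see complete();
    which backend answers is a system-wide setting the app never
    inspects."""
    def __init__(self, backend: LLMBackend):
        self._backend = backend

    def complete(self, prompt: str) -> str:
        return self._backend.complete(prompt)

# The calling program is identical either way:
service = OSLLMService(LocalBackend())
print(service.complete("Summarize this file."))
```

The point is that swapping `LocalBackend` for `CloudBackend` changes nothing in the application code.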






There are a number of standard APIs already: OpenAI supports Anthropic's MCP, and LM Studio supports both its own proprietary API and OpenAI's API. OpenAI has open-sourced its realtime console (https://github.com/openai/openai-realtime-console/tree/webso...) and others. Most local clients just have a https://URL:port field and a drop-down box for which RESTful API you want to use (for 88% of use cases they all support the same stuff; for realtime it's not quite settled yet), plus a field for an API key if needed.
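To illustrate the "same API, different URL:port" point: a sketch of building an OpenAI-compatible chat-completions request that works unchanged against a cloud endpoint or a local server. The base URLs and model names are examples only, and whether a given local server needs a key varies.

```python
import json
from urllib import request

def build_chat_request(base_url, model, prompt, api_key=None):
    """Build a POST request for an OpenAI-compatible
    /v1/chat/completions endpoint. Only base_url and the
    optional API key differ between cloud and local servers."""
    headers = {"Content-Type": "application/json"}
    if api_key:  # local servers often don't require one
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        f"{base_url}/v1/chat/completions",
        data=body, headers=headers, method="POST",
    )

# Same client code, different base URL (example endpoints):
local = build_chat_request("http://localhost:1234", "local-model", "hi")
cloud = build_chat_request("https://api.openai.com", "gpt-4o", "hi",
                           api_key="sk-...")
```

Sending either request with `urllib.request.urlopen` (or any HTTP client) is all a "drop-down box" client really does under the hood.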

To me, the value of these types of projects is specifically that they are self-contained and local-only. That's the only kind of interaction with them I'm comfortable with right now. I mostly jumped ship on commercial software a long time ago, so I'm hoping there will still be some AI-free Linux distros for a good long time. Different strokes for different folks, I suppose. If the type of AI integration you're imagining ever becomes ubiquitous and mandatory, I may or may not stop working with computers entirely, depending on the state of the tech and the state of society by then.


