Lots and lots of developers can't program at all. As in literally - they can't write a simple function like "fizzbuzz" even if you let them use reference documentation. Many don't even know what a "function" is.
(Yes, these are people with developer jobs, often at "serious" companies.)
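For reference, the exercise in question is tiny; one common Python rendering (the exact wording varies between interviewers):

  def fizzbuzz(n):
      # Multiples of 3 print "Fizz", multiples of 5 print "Buzz",
      # multiples of both print "FizzBuzz"; everything else, the number.
      for i in range(1, n + 1):
          out = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
          print(out or i)

  fizzbuzz(15)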
This is half the point of interviewing. I've been at places that just skip interviewing if the person comes highly recommended, has a great CV, or whatever.
Predictably, they end up with some people ranging from "can't code at all" to "newbie coder without talent".
I've never met someone like that and don't believe the claim.
Maybe you mean people who are bad at interviews? Or people whose job isn't actually programming? Or maybe "lots" means "at least one"? Or maybe they can strictly speaking do fizzbuzz, but are "in any case bad programmers"? If your claim is true, what do these people do all day (or, let's say, did before LLMs were a thing...)?
I've definitely worked with a person who struggled to write if statements (let alone anything more complex). That was just one guy, so I wouldn't say "lots and lots" like the other poster did, but such people do exist.
Yeah I’ve been doing this for a while now and I’ve never met an employed developer who didn’t know what a function is or couldn’t write a basic program.
I’ve met some really terrible programmers, and some programmers who freeze during interviews.
By "lots" I estimate about 40 percent of the software developer workforce. (Not a scientific estimate.)
> Maybe you mean people who are bad at interviews?
No, the opposite. These developers learn the relevant buzzwords and can string them together convincingly, but fail to actually understand what they're regurgitating. (Very similar to an LLM, actually.)
E.g., these people will throw words like "Dunder method" around with great confidence, but then will completely melt down for fifteen minutes if a function argument has the same name as a module.
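To make that shadowing gotcha concrete, here's a minimal sketch (the names are hypothetical; json stands in for any module):

  import json

  def save_report(json):  # the parameter shadows the imported module
      # Inside this function, `json` is the dict argument, not the module,
      # so this raises AttributeError: 'dict' object has no attribute 'dumps'.
      return json.dumps(json)

  save_report({"status": "ok"})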
When on the job these people just copy-paste existing code from the "serious company" monorepo all day, every day. They call it "teamwork".
You can (theoretically) constrain LLM output with a formal grammar. This works at the next-token selection step, so it's a real constraint and not just another prompt hack. You could also (theoretically) have a standard way to prompt an LLM API with formal-grammar constraints.
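A minimal sketch of the idea, with a toy vocabulary and a made-up grammar object (real implementations, e.g. llama.cpp's GBNF grammars, compile the grammar into a token mask applied inside the sampler):

  import math, random

  class DigitGrammar:
      # Toy stand-in for an automaton compiled from a formal grammar:
      # in this state, only digit tokens are legal.
      def allowed_next_tokens(self):
          return set("0123456789")

  def sample_constrained(logits, grammar):
      # Drop forbidden tokens *before* sampling - a constraint at the
      # next-token selection step, not a prompt trick.
      allowed = grammar.allowed_next_tokens()
      masked = {t: v for t, v in logits.items() if t in allowed}
      z = sum(math.exp(v) for v in masked.values())
      r, acc = random.random(), 0.0
      for tok, v in masked.items():
          acc += math.exp(v) / z
          if r <= acc:
              return tok
      return tok  # guard against floating-point slack

  logits = {"a": 2.0, "7": 1.5, "!": 0.3, "3": 0.1}
  print(sample_constrained(logits, DigitGrammar()))  # always a digit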
That would be a very useful feature.
MCP is not that, MCP is completely unnecessary bullshit.
The first part of this comment is great, but can you please avoid name-calling in the sense that the HN guidelines use the term?
"When disagreeing, please reply to the argument instead of calling names. 'That is idiotic; 1 + 1 is 2, not 3' can be shortened to '1 + 1 is 2, not 3." - https://news.ycombinator.com/newsguidelines.html
Your comment would be just fine without that last bit—and better still if you had replaced it with a good explanation of how MCP is not that.
Can you elaborate a bit more on "theoretical" formal grammars and constraints that would allow the LLM to use a search engine or git commands and produce the next tokens that take the results into account?
Here are some practical, non-theoretical projects based on a boring and imperfect standard (MCP) that provide LLMs capabilities to use many tools and APIs in the right situation: https://github.com/modelcontextprotocol/servers
An LLM doesn't "use" anything. Your agent does that. The agent is the program that prompts the LLM and reads the response. (The agent typically runs locally on your device, the LLM is typically on a server somewhere and accessed through an API.)
For the agent to parse an LLM's reply properly you'd ideally want the LLM to give a response that adheres to a standard format. Hence grammars.
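A hedged sketch of why that matters (the JSON shape and the call_llm stub are made up for illustration, not any particular API):

  import json

  def call_llm(prompt):
      # Stand-in for a real API client; imagine the response is
      # grammar-constrained to exactly this JSON shape.
      return '{"tool": "search", "args": {"query": "weather in Berlin"}}'

  TOOLS = {"search": lambda query: f"results for {query!r}"}

  # Because the format is guaranteed, the agent can dispatch the tool
  # call directly instead of regex-scraping free-form prose.
  reply = json.loads(call_llm("Find the weather in Berlin."))
  print(TOOLS[reply["tool"]](**reply["args"]))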
I'm guessing your confusion stems from the fact that you've only ever used LLMs in a chat box on a website. This is OpenAI's business model, but not how LLMs will be used when the technology eventually matures.
Do you build agents that interface with web browsers? BLAST is sort of like vllm for browser+LLM. The motivation for this is that browser+LLM is slow and we can do a lot of optimization with an engine that manages browser+LLM together - e.g. prefix caching, auto-parallelism, data parallelism, request hedging, scheduling policy, and more coming soon.
Now the API is what may be throwing folks off. Right now it's an OpenAI-compatible API. We will implement MCP. But really the core thing is abstracting away optimizations required to efficiently run browser+LLM.
A system that does the following given a task_description:
  while LLM("is <task_description> not done?"):
      Browser.run(LLM("what should the browser do next given <Browser.get_state()>"))
This simple loop turns out to be very powerful, achieving the highest performance on some of OpenAI's latest benchmarks. But it's also heavily unoptimized compared to a system that is just LLM("<task_description>"), for which we already have things like vllm. BLAST is a first step towards optimizing this while loop.
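For instance, request hedging (one of the optimizations listed above) might look roughly like this; llm_call is a made-up stand-in for a real backend:

  import asyncio

  async def llm_call(prompt, replica):
      # Fake backend: replica 0 happens to answer fastest here.
      await asyncio.sleep(0.1 * (replica + 1))
      return f"answer from replica {replica}"

  async def hedged(prompt, replicas=2):
      # Send the same request to several replicas, keep the first
      # reply, cancel the stragglers - trading compute for tail latency.
      tasks = [asyncio.create_task(llm_call(prompt, r)) for r in range(replicas)]
      done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
      for t in pending:
          t.cancel()
      return done.pop().result()

  print(asyncio.run(hedged("is the task done?")))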
For me, NodeRED is far more low-level, with switch nodes being the equivalent of a case statement and change nodes the equivalent of variable assignments.
n8n is far more high-level, with Google Sheets nodes communicating with Postgres database nodes. There is far less ability to manipulate the data being passed around; as many have said, it's Zapier-like.
NodeRED is used for home automation, for talking to devices connected to the network, and for providing nice dashboards of what's happening. Another big use case is IIoT. So it is less focused on integrating SaaS services and more on device integration and inter-device communication.
Plus, NodeRED has a great collection[1] of third-party nodes that can help in connecting to new devices. Installing nodes is based on npm but is completely automated.
¯\_(ツ)_/¯