The fact that it's been rebranded already within a year of launch, and is so poorly marketed, should tell you something about the future of this project.
Given how fast dev machines are, doing this might not be a bad idea for small teams _if_ you have some way to ensure reproducible builds/tests. Otherwise I can imagine a lot of "works on my machine" cases.
Though if you're an org of 1,000+ engineers working in big monorepos, you're better off sticking with CI.
You can’t put an API on everything because it’d take a ton of time and money to pull that off.
I can’t think of any technological reason why every digital system can’t have an API (barring security concerns, which would need to be handled case by case).
So instead, we put 100s of billions of dollars into statistical models hoping they could do it for us.
A web page is an Application/Human Interface. Outside of security concerns, companies can make more money if they control the Application/Human Interface, and reduce the risk of a middleman / broker extorting them for margins.
If I run a flight aggregator that has a majority of flight bookings, I can start charging 'rents' by allowing featured/sponsored listings to be promoted above the 'best' result, leading to a prisoner's dilemma where airlines should pay up to their margins to keep market share.
If an AI company becomes the default application human interface, they can do the same thing. Pay OpenAI tribute or be ended as a going concern.
I would lump Dagger in with JS (programming language, ecosystem, runtime) and macOS (operating system) before comparing it to any single tool.
Answering "what does Dagger do" can be tough, because its applications are very broad.
It's always hard to describe general-purpose platforms like these. Dagger is an open platform for building composable software. It has SDKs, a sandboxed execution environment, observability, and now this shell interface.
But when "later" does come, it can take dev-years to fully disentangle the global state and allow code reuse. Did you gain dev-years of productivity by using it in the first place? Probably not.
If you have good reason to believe that an app will stick around for more than a year, be maintained by more than 3 people, or grow to more than 500k lines of code (sub in whatever metrics make sense to you), don't put off removing global state for later. You will regret it eventually, and it doesn't cost much to do it right the first time.
(Also, no mainstream language I'm aware of forces you to not use global state. Even Java, famed for its rigidity, has global state readily available if you really do need it.)
Global state is a tool that will almost always lead to bad architecture in an app where architecture matters. I'm sure you can point to a counterexample or two where a set of devs managed to stay disciplined indefinitely, but that doesn't change the fact that allowing people to reach into a mutable variable from anywhere in the system enables trivially accessible spooky action at a distance, and spooky action at a distance is a recipe for disaster in a medium to large code base.
In a project with more than a few people on it, your architecture will decay if it can decay. Avoiding global state removes one major source of potential decay.
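A concrete sketch of that spooky action (names made up):

  # config.rb
  $retries = 3    # global, mutable, reachable from anywhere

  # years later, deep in an unrelated module:
  $retries = 0    # silently changes retry behavior everywhere else

  # nothing at any call site hints that a faraway write changed behavior,
  # so whoever debugs the resulting flakiness has no local clue to follow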
“Almost” is key there. I respect your position, but it’s an always/never take, and the longer I am in this industry, the more I find myself leaning into “it depends.” Here’s a take that articulates this being done well on a large codebase better than I can in a short comment: https://dev.37signals.com/globals-callbacks-and-other-sacril...
No, it isn't—I'm the one who inserted the word "almost" into that sentence! Where did you get the idea that I meant always/never?
Like I said, you can point to exceptions but that doesn't change the rule. It's better to teach the rule and break it when you really know what you're doing—when you understand that you're breaking a rule and can articulate why you need to and why it's okay this time—than it is to spread the idea that globals are really just fine and you need to weigh the trade-offs. The odds are strongly against you being the exception, and you should act accordingly, not treat globals as just another tool.
Sometimes amputation is the right move to save someone's life, but you certainly should not default to that for every papercut. It's a tool that comes out in extreme circumstances only when a surgeon can thoroughly justify it.
Respectfully, your response further illustrates what I meant by calling your take an always/never one. I’m aware you’re the one who put “almost” in there, and I didn’t mean to imply you were being stubborn with that take; that’s why I said (and genuinely meant) that I respect it.
But I’m also aware that you’re comparing using global state to amputating a human limb. I don’t think it’s nearly that extreme. I certainly wouldn’t say global state “almost always leads to bad architecture,” as evidenced by my aligning with a framework which has a whole construct for globals baked into it (Rails’ Current singleton) that I happen to enjoy using.
Sure, global state is a sharp knife, which I already said. It can inflict pain, but it’s also a very useful tool in certain scenarios (more than would equate to “almost [never]” IMO).
So your response aligns with how I took your original post, and what I inferred “almost” really meant: basically never. My point is that I don’t agree with your take being a “rule.” While I understand your perspective, instead of saying basically never, I would say, “it depends.”
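For anyone who hasn’t seen it, the Current pattern looks roughly like this (a minimal sketch from memory; `authenticated_user` stands in for whatever your auth layer returns):

  # app/models/current.rb
  class Current < ActiveSupport::CurrentAttributes
    attribute :user, :request_id
  end

  # set once per request, e.g. in a before_action:
  Current.user = authenticated_user

  # read from anywhere during that request; CurrentAttributes resets
  # the values automatically between requests
  Current.user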
Sorry, I like Ruby, but this is nonsense. Rails apps get enormous very quickly, like apps written in every other framework. In most work you can't just declare that your world will be small, your world is as big as your problem is.
The happiness of a developer writing code can be the misery of the one who has to read or debug it.
I worked in Ruby for a couple of years around 2009, and having to deal with code that implemented most of its logic via method_missing is still one of the strongest negative memories I have of coding.
`binding.irb` and `show_source` have been magical in my Ruby debugging experience. `binding.irb` to trigger a breakpoint, and `show_source` will find the source code for a method name, even in generated code somehow.
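A minimal sketch (class and method names made up):

  def checkout(cart)
    binding.irb    # execution pauses here and drops you into an IRB session
    cart.total
  end

  # then, inside that session:
  #   show_source Cart#total
  # prints the file, line, and body of the method, even when it was
  # defined by metaprogramming rather than a literal def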
Another annoying one from that category is Ruby's forwarded methods. Since they're created via generated, injected code, you can't query which method they forward to at runtime. Or not easily, anyway.
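Concretely, with stdlib Forwardable:

  require "forwardable"

  class Report
    extend Forwardable
    def_delegator :@rows, :each    # generates Report#each, forwarding to @rows.each
  end

  # the forwarding target lives only inside the generated method body;
  # there's no runtime metadata saying "Report#each delegates to @rows",
  # so you end up grepping for the def_delegator call instead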
What I advise (and aim for) is only pulling out the sharp knives for "library" code, while application code stays "simple" (and thus much more easily navigable). Otherwise you can absolutely make a bloody mess!
Every language prioritizes something (or some things) because every language was made by a person (or people) with a reason: Python and correctness; Java and splitting up work; Go and something like "simplicity" (not that these are the only priorities for each language). As another comment points out, Matz prioritized developer happiness.
My favorite example of this is Ruby's amazingly useful and amazingly whack array arithmetic: subtraction (`arr1 - arr2`) is element-wise removal, but addition (`arr1 + arr2`) is a simple append. These are almost always exactly what you want when you reach for them, but they're completely "incorrect" mathematically.
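Concretely:

  [1, 2, 2, 3] - [2, 9]    # => [1, 3]         removes every occurrence of each element
  [1, 2] + [2, 3]          # => [1, 2, 2, 3]   plain concatenation, duplicates kept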
I'd say they both optimize for DX, but they come at it from very different angles. Ruby is focused on actually writing the code: making it feel expressive, intuitive, and fun.
Go is more about making it easier to build fast and robust systems. But it really doesn't care if the code itself is ugly and full of boilerplate.
As I've gotten more experience, I've come to really appreciate Go's tradeoffs. It's not as fun up front, but on the other hand, you're less likely to get server alerts at 4am. It really depends what you're building though.
The Go ecosystem is generally good. However, given that Go as a language doesn't have any "fancy" (for lack of a better word) syntactic features, you can't create DSLs like this.
Though Ruby's expressiveness comes at a cost; I'd personally stick with Go in a team, but use something like RubyLLM for personal projects.
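To illustrate the kind of DSL in question (method names are illustrative, not necessarily RubyLLM's exact API):

  chat = LLM.chat.with_model("claude-3-7-sonnet")    # fluent, chainable setup
  chat.ask("Explain Ruby blocks") do |chunk|         # block-based streaming
    print chunk.content
  end

Blocks and optional parentheses do most of the lifting here, and Go offers neither, which is why the Go sketch further down has to lean on Must() and explicit loops instead.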
I'm wary of equating "ability to create dsls like this" with "prioritizing developer experience".
Ruby and Go each prioritize different parts of the developer experience. Ruby prioritizes the experience of the author of the initial code at the expense of the maintainer who comes later. Go prioritizes the experience of the maintainer over that of the initial author. Both prioritize a developer, both de-prioritize a different developer, and which one makes more sense really depends on the ratio of time spent writing greenfield code versus maintaining something written years ago by another human who's long gone.
I disagree with the idea that Go prioritizes the maintainer. More lines of code typically makes maintenance more difficult. Go is easy to read line by line, but the verbosity makes it more challenging to understand the bigger picture.
I find changes in existing Go software often end up spreading far deeper into the app than you'd expect.
The runtime is fantastic, though, so I don't see it losing its popularity anytime soon.
> More lines of code typically makes maintenance more difficult.
That’s kind of just the surface level of maintenance though. Go is not so much focused on making it easy to read a single file, but on minimizing the chains of abstraction and indirection you need to follow to understand exactly how things work.
It’s much more likely that all the logic and config to do something is right there in that file, or else just one or two “Go to definition” clicks away. You end up with way more boilerplate and repetition, but also looser coupling between files, functions, and components.
Contrast that to a beautiful DSL in Ruby. It’s lovely until it breaks or you need to extend it, and you realize that a small change will require refactoring call sites across a dozen different files. Oh and now this other thing that reused that logic is broken, and we’ve got to update most of the test suite, and so on.
> or else just one or two “Go to definition” clicks away
This is the biggest part of it: maintainers need static analysis and/or (preferably and) very good grepability to help them navigate foreign code. Ruby by its nature makes static analysis essentially impossible to do consistently, whereas Go leans to the opposite extreme.
I've found for years that ctrl-b works pretty damn well in RubyMine as well as GoLand. Huge advantage vs other editors 5+ years ago. I wonder if the difference is still as stark in the age of LSPs?
Surely you can have semantically the same API in Go:
  // Must is needed because of Go's error handling differences: it panics
  // on error so the happy path reads like the Ruby version.
  func Must[T any](v T, err error) T {
      if err != nil {
          panic(err)
      }
      return v
  }

  chat := Must(gollm.Chat().WithModel("claude-3-7-sonnet-20250219"))

  resp := Must(chat.Ask("What's the difference between an unexported and an exported struct field?"))
  resp = Must(chat.Ask("Could you give me an example?"))

  resp = Must(chat.Ask("Tell me a story about a Go programmer"))
  for chunk := range resp { // requires Go 1.23+ range-over-func iterators
      fmt.Print(chunk.Content)
  }

  resp = Must(chat.WithImages("diagram1.png", "diagram2.png").Ask("Compare these diagrams"))

  type Search struct {
      Query string `description:"The search query" required:"true"`
      Limit int    `description:"Max results" default:"5"`
  }

  func (s Search) Execute() ([]string, error) { ... }

  // Go methods can't take type parameters, so the tool goes in as a value
  // rather than as chat.WithTool[Search]():
  resp = Must(chat.WithTool(Search{}).Ask("Find documents about Go 1.23 features"))
And so on. The syntax is different, of course, but the semantics (save for language-specific nuances, like error handling and the lack of optional arguments) are approximately the same, the biggest difference being that WithSomething() has to precede Ask().
I was an early contributor to Langchain and it was great at first - keep in mind, that's before chat models even existed, not to mention tools, JSON mode, etc.
Langchain really, I think, pushed the LLM makers forward toward adding those features, but unfortunately it got left in the dust and became something of a zombie. Simultaneously, the foundational LLM providers kept adding things that turned them more into walled gardens, where you no longer needed to connect multiple things (like scraping websites with one tool, feeding that into the LLM, then storing it in a vector datastore - now that's all built in).
I think Langchain has tried to pivot (more than once perhaps) but had they not taken investor $$ early on (and good for them) I suspect that it would have just dried up and the core team would have gone on to work at OpenAI, Anthropic, etc.
langchain and llamaindex are such garbage libraries: not only do they never document half of the features they have, they also keep breaking their APIs from one version to the next.
I was about to mention those. I decided a while ago to build everything myself instead of relying on these libraries. We could use a PythonLLM over here because it seems like nobody cares about developer experience in the Python space.
Thank you! This is what the Ruby community has always prioritized - developer experience. Making complex things simple and joyful to use isn't just aesthetic preference, it's practical engineering. When your interface matches how developers think about the problem ___domain, you get fewer bugs and more productivity.
> can you make it detect the device somehow, maybe with some additional permissions, instead of user selecting from a dropdown?
Detecting CPU and GPU specs browser-side is almost impossible to do reliably (even if relying on advanced fingerprinting and certain heuristics).
For GPUs, it may be possible to use (1) WebGL’s `WEBGL_debug_renderer_info` extension [0] or (2) WebGPU’s `GPUAdapter#info` [1], but I wouldn’t trust either of those APIs for general usage.
Jay, you seem knowledgeable on this - thanks for answering - I have a question:
I did look at auto-detecting before, but it seems like you can only really tell the features of a GPU, not so much the good info (VRAM amount and bus speed) - is that the case?
I looked at the GPUAdapter docs, and all it told me was:
- device maker (amd)
- architecture (rdna-3)
and that was it. Is there a way to poke for bus speed and vram amount?
> Is there a way to poke for bus speed and vram amount?
Unfortunately, I’m not aware of any way to reliably get this type of information browser-side.
If you really want to see how far you can get, your best bet would be via fingerprinting, which would require some upfront work using a combination of the manual input you already have now and running some stuff in the background to collect data (especially timing-related data). With enough users manually inputting their specs and enough independently collected data, you’d probably be surprised at how accurate you can get via fingerprinting.
That said, please do NOT go the fingerprinting route, because (1) a lot of users (including myself) hate being fingerprinted (especially if done covertly), (2) you need quite a lot of data before you can do anything useful, and (3) it’s obviously not worth the effort for what you’re building.