The feature itself seems reasonable and useful, especially if most of your tooling is written in Go as well. But this part caught my attention:
> user defined tools are compiled each time they are used
Why compile them each time they are used? Assuming you're compiling them from source, shouldn't they be compiled once, and then have the `go tool` command reuse the same binaries? I don't see why it compiles them at the time you run the tool, rather than when you're installing dependencies. The benchmarks show a significant latency increase. The author also provided a different approach which doesn't seem to have any obvious downsides, besides not sharing dependency versions (which may or may not be a good thing - that's a separate discussion IMO).
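For anyone who hasn't dug into the proposal yet, my understanding of the setup being discussed is roughly this (the tool here is just an illustrative example; the corresponding require entries get added as well):

```
// go.mod
module example.com/myproject

go 1.24

// added with `go get -tool golang.org/x/tools/cmd/stringer`,
// then run with `go tool stringer`
tool golang.org/x/tools/cmd/stringer
```

With the source and version fully pinned like that, it seems like the built binary should be cacheable rather than recompiled on every invocation.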
Ok, I'm trying to suss out what this means, since `go tool` didn't even exist before 1.24.
The functionality of `go tool` seems to build on the existing `go run` and the latter already uses the same package compilation cache as `go build`. Subsequent invocations of `go run X` were notably faster than the first invocation long before 1.24. However, it seems that the final executable was never cached before, but now it will be as of 1.24. This benefits both `go run` and `go tool`.
That raises the question: do the times reported in the blog reflect the benefit of executable caching, or were they collected before that feature was implemented (and/or is it not working properly)?
The tl;dr is...:
1. The new caching impacts the new `go tool` and the existing `go run`.
2. It has a massive benefit.
3. `go tool` is a bit faster than `go run` due to skipping some module resolution phases.
4. Caching is still (relatively) slow for large packages.
Awesome! It's interesting that there's still (what feels like) a lot of overhead from determining whether the thing is cached or not. Maybe that will be improved before the release later this month. I'm wondering too if that's down to go.mod parsing and/or go.sum validation.
I'd also note the distinction between `go run example.com/foo@latest` (which, as you note, must do some network calls to determine the latest version of example.com/foo) and plain `go run example.com/foo` (no @), which will just use the version of foo that's in go.mod -- presumably `go tool foo` is closer to the latter.
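Very roughly, the difference looks like this (commands are illustrative, not verified against the final 1.24 release):

```
go run example.com/foo@latest   # resolves "latest" over the network on each invocation
go run example.com/foo          # uses whatever version of foo is already in go.mod
go tool foo                     # 1.24: runs the tool declared via go.mod's tool directive, also pinned
```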
"Shared dependency state" was my very first thought when I heard about how it was built.
Yeah I want none of that. I'll stick with my makefiles and a dedicated "internal/tools" module. Tools routinely force upgrades that break other things, and allowing that is a feature.
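Concretely, the kind of isolation I mean looks something like this (tool choices and paths are just examples): a separate module under internal/tools with its own go.mod, plus the usual blank-import file so the tool dependencies stay tracked there and nowhere else.

```go
//go:build tools

// internal/tools/tools.go -- this lives in its own module (internal/tools/go.mod),
// so the tools and their transitive requirements never appear in the main
// module's go.mod. A makefile target builds them into a local bin/ directory.
package tools

import (
	_ "golang.org/x/tools/cmd/stringer"
	_ "honnef.co/go/tools/cmd/staticcheck"
)
```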
same. tools are not part of the codebase, nor are they dependencies.
you've got to keep the artefact isolated from the tools you use to work on it.
it is bonkers to start versioning the tools used to build a project mixed in with the artefact's own dependencies. should we include the version of VSCode used to type the code? how about the transitive dependencies of VSCode? how about the OS itself used to edit the files? how about the version of the LLM model that generated some of this code? where does this stop?
The state of things with some projects I've touched is "We have a giant CI thing that does a bunch of things. It is really very large. It might or might not be broken, but we work around it and it's fine."
I think some of the euphoria around tracking tooling in the repo is "yes, now I can run a command in the repo and it's as if I spun up a docker container with the CI locally, but it's just regular software running on my machine!" This is a huge improvement if you're used to the minutes-or-hours-long CI cycle being your only interaction with the "real environment."
The reductio ad absurdum that you describe is basically "a snapshot of the definitions of all the container images in our CI pipeline." It's not a ridiculous example, it's how many large projects run.
I am in the same boat with you: this had better be versioned, reproducible, and standardised.
my key concern is whether the tools used to build a project have to be in the same pool as the project itself (which may or may not use tools to build/edit/maintain/debug it).
It makes sense to some extent when the toolchain can tap into native language specific constructs when you need that REPL-like iteration loop to be tight and fast. But that kind of thing is probably only required in a small subset of the kind of tooling that gets implemented.
The tradeoff with this approach is that you lose any sort of agnosticism when you drop into the language-specific tooling. So if you work at a corporation and have to deal with multiple toolchains, every engineer now needs to learn and work with new build tooling X times, once for each supported language. This always happens to some extent - there are always going to be some things that use the language's specific task-runner constructs - but keeping that minimal is usually a good idea in this scenario.
Your complaint feels to me like it is about poorly implemented CI systems that heavily leverage container-based workflows (of which there are many in the wild). If implemented properly with caching, the main overhead you are paying in these types of setups is the virtualization overhead (on Macs) and the cold-start time for the engine. For most people and in most cases, neither will make a significant difference in the wall-clock time of their loop, comparatively.
This take absolutely boggles the mind. You don't want people compiling your code with different versions of tools, forcing you to debug thousands of potential combinations of everything. You don't want people running different versions of formatters/linters that leave conflicting diffs throughout your commit history.
so where does it stop? should we include the OS version on the laptop of everyone who edits the code? it is getting ridiculous.
you've got to draw a line somewhere.
in my opinion: "if dependency code is not linked, not compiled in, and not copied in as source (or as other artefacts, e.g. model weights), then it must not be included in the dependency tree of the project source code".
that still means you are free to track versions/hashes/etc. of tools and their dependencies. just do it separately.
Ideally it stops at the point where the tools actually affect your project.
Does everyone need to use the same IDE? Obviously not. Same C++ compiler? Ideally yes. (And yes you can do that in some cases, e.g. Bazel and its ilk allow you to vendor compilers.)
It's not that uncommon to have the OS version fixed in CI/CD pipelines. E.g. a build process intended to produce artefacts for Apple's App Store depends on Xcode, and Xcode may be forced to upgrade when macOS upgrades, which can break things. So the OS version becomes a line in the requirements. It's kinda disappointing, but it's the real state of affairs.
how about the hardware the software is supposed to run on? that can certainly have an effect. let's pin that into the project repo too. I don't want to continue this thread, but to reiterate my point: you have to stop somewhere in deciding what to pin/version and what not to. I think my criterion is reasonable, but ultimately it is up to you. (so long as whole ecosystems and dependency trees do not become bags of everything, and Go is alright here so far: after digging deeper, what v1.24 proposes will not cause a massive dependency apocalypse of tools propagating everywhere, and only what you actually use in main is going to be included in go.mod, thanks to module pruning.)
yes let's have a meta project that can track the version of my tools "separately", and the version of my repo.
linters, formatters, and reproducible codegen should be tracked. their output is deterministic and you want to enforce that in CI in case people forget. the rest doesn't really affect the code (well, Windows and its CRLF line endings do, but git has eol attributes to control that).
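something like this in CI is enough to catch the "forgot to run it" case (sketch only, adjust to whatever tools you actually use):

```
# fail the build if formatting or generated code has drifted
test -z "$(gofmt -l .)"
go generate ./... && git diff --exit-code
```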
agree it has to be deterministic. my major concern was whether tool dependencies are mixed with the actual software they are used to build. hopefully they are not mixed, so that you have a guarantee that everything you see in the dependency tree is actually used. otherwise there is not much point in the tree (since you can't say whether something is used or not).
again, v1.24 seems to be okay here. go mod pruning should keep the nodes in the tree from polluting each other.
No one says that this is a one-size-fits-all solution, but for some use cases (small tools that are intimately connected to the rest of the codebase, even reuse some internal code/libraries) it's probably helpful...
"Popular" and "good" have no relation to each other. They correlate fairly well, but that's all.
Blending those dependencies already causes somewhat frequent problems for library owners / for users of libraries that do this. Encouraging it is not what I would consider beneficial.
I wrote this blog post detailing the specific steps I used to deploy SPIRE on a Kubernetes cluster running off my Ubuntu laptop. It's a regular Kubernetes cluster created using kubeadm, running one controller and two worker nodes. My purpose was to get a working SPIRE setup that could be used for trying out stuff that uses these identities.
This is a basic but usable implementation of the C++ Concurrency TS. The README lists what's supported and what's not. I think the when_any implementation is flaky.
While I have a personal favorite, I recognize this is one of those questions that cannot have one right answer. The best language for the interview is the one the candidate is most proficient and expressive in. If you're making the candidate write a lot of algorithms, it helps to have a basic ds/algo library that's flexible and easy to use. C++ has the STL, which is great, but for people unfamiliar with its style (iterators) or syntactic peculiarities (templates, etc.), things may not be very smooth. Java has a decent library too, and I guess so does Python, but I'm less familiar with it.
Very dependent on your goals in life. I have heard great things about both companies. Nutanix does more cloud stuff; Cohesity is more backup and restore. But they have great tech stacks. Lots of C++ at both places, which may be a plus or a minus for you. Not much idea about company culture.
Key areas: Systems and application programming, 11+ years of experience. Worked on high availability and server management areas. Keen learner - highly proficient in C++ / C++11. Can work fluently in Java / Python / NodeJS. Contracted author for an upcoming title on C++ using Boost libraries.
Work permit status: US: Have an H1B petition filed and approved in 2012. Need new sponsor. Could not travel earlier due to a personal accident followed by offer revocation.
Seeking employers preferably in the Bay Area. Open to working in EU as well. Looking to build and use cloud-enabling technologies. Want to work in fast-moving, dynamic teams with freedom to choose tools and techniques and challenging problems to solve.
And I didn't quite understand the euphoria.