We had the same setup: an ex-Googler SRE demanded that builds be done with Bazel for our Go monorepo. He convinced someone to switch it all over, so builds, deploys, CI, testing, etc. all became dependent on Bazel.
That person left, and the Bazel stuff was left unmaintained as no one had the interest or the time to learn it.
Later, Bazel killed off rules_docker and replaced it with rules_oci, which meant we could no longer update our Golang version without a painful migration, and we ended up breaking production a bunch of times along the way because of its quirks.
Eventually we invested the time to rip the whole thing out and replace it with standard Go tooling plus a multistage Docker build for the container image. Everything, from running tests and using normal Golang tooling to the legibility of builds, CI, and deploys, is easier.
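For context, the multistage Dockerfile we landed on was roughly the shape below. This is just a sketch; the Go version, module paths, and base images here are illustrative, not our exact setup:

    # build stage: compile with the standard Go toolchain
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    # ./cmd/app is a placeholder for the service's main package
    RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

    # runtime stage: copy only the static binary into a minimal base image
    FROM gcr.io/distroless/static-debian12
    COPY --from=build /out/app /app
    ENTRYPOINT ["/app"]

With something like this, bumping the Go version is just a matter of changing the builder image tag, which is exactly the part that had become painful for us under rules_docker.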
The best thing we did was remove it and move to standard Golang tooling.
Personally, most cases I have seen are caused by systematic under-investment in developer tooling and infrastructure at the company. Management layers often don't understand that the software development assembly line is composed not just of workers (software engineers) but also of the tools and machines that enable faster workflows. This often results in critical processes in the pipeline from code to prod, such as monitoring, alerting, deployments, builds, and git, being maintained by one person, and when that one person leaves, the system fails and the company suffers.
I think successful Bazel adoption in an org is often a signal that the company has grown in size and values its developers' time and happiness. Failure to adopt often means a lack of investment in dev experience in general.
I worked at a huge big tech company with effectively infinite resources. They say the monorepo using Bazel and Gazelle is a success there; I personally found it the worst dev experience ever. Everything was slow, the IDE didn't work properly, and there was no easy debugging, IntelliSense, refactoring, or code generation... Now whenever a company contacts me on LinkedIn, I ask whether they use a monorepo, and if they do, I just answer that I'm not interested.
I agree in principle, although I'm not sure that in our scenario Bazel was even a value add when it was maintained. It just meant our Golang developers had to use a bunch of obtuse tooling like Gazelle instead of the normal Go tooling they were already familiar with. It meant a bunch of smart people ended up blocked, constantly having to ask other devs questions in Slack channels when they otherwise wouldn't have needed to. Some teams even avoided joining our monorepo and built out their own CI, builds, etc. in another repo solely because we were using Bazel in the monorepo.
The reality is, in a post-layoffs world, practically every company is scrimping on these sorts of "back office" things. If people couldn't even get these complex build systems to work in the pre-layoffs, low-interest-rate tech world where labour was more abundant, how can they get them built and maintained now?
The builds were much faster afterwards, but then Go builds are always fast. Maybe the Bazel cache is more beneficial in other languages that are slower to build.