
The reluctance to introduce new syntax too quickly (looking at you, TC39 and Babel) makes Go an almost maintenance-free language.

If you have learned idiomatic Go, you can maintain and patch other libraries in the ecosystem very quickly.

Unified codestyle, unified paradigms, unified toolchain.

It's a language with harsh opinions on everything. If you manage to get over your own opinions, you'll realize that any opinion upstream is better than no opinion downstream.

That's why Go's toolchain isn't as messed up as npm, yarn, grunt, gulp, webpack, parcel, babel and other parts of the ecosystem that have no conventions and are therefore very expensive to maintain.




This!

In Go, a feature needs to have an extremely good reason to be added, and even then it's only added with care and caution.

In other languages, pointless features are added because maintainers are afraid, or not empowered, to say: "yeah, we get it, but no, we will not add cruft to the language because you can't write your own two-line function to achieve the same thing. No matter how hard you dunk on us on Twitter, it's take it or leave it, but you won't tell us how to run this project."


> pointless features are added because maintainers are afraid

I wouldn't have described language designers' feelings that way, but you're absolutely right. For example, witness the recent features added to Python with little more justification than "other languages have it". It's pure FOMO - fear of missing out.


Would you expand on the Python issue? I find recent Python additions either useful or non-intrusive, so I wonder which ones you think are born out of FOMO.


The `match` statement is the most obvious - its motivation [0] is "pattern matching syntax is found in many languages" and "[it will] enable Python users to write cleaner, more readable code for [`if isinstance`]". But `if isinstance` is bad Python! [1]

`match` also breaks fundamental Python principles [2] and interacts badly with the language's lack of block scope:

    >>> a, b = 1, 2
    >>> match a:
    ...     case b: pass
    ...
    >>> a, b
    (1, 1)
Not to mention that it also required large changes to the CPython implementation, including an entirely new parser(!) - which means other implementations may never support it [3]. Clearly `match` doesn't fill a gap in a coherent design for Python. It seems to have been added out of FOMO and/or resume-driven development.

Another example is async-await - while the concept is fine (although I think stackful coroutines are a better fit for a high-level language), the syntax is just copy-pasted from other languages [4]. There seems to have been little thought as to why C# etc. chose that syntax (to allow `async` and `await` to be contextual keywords), or as to how `async def` contradicts existing Python syntax for generators.

[0] https://peps.python.org/pep-0635/#motivation

[1] http://canonical.org/%7Ekragen/isinstance/

[2] https://x.com/brandon_rhodes/status/1360226108399099909

[3] https://github.com/micropython/micropython/issues/8507

[4] https://peps.python.org/pep-0492/#why-async-and-await-keywor...


Here's Guido himself expressing FOMO about what would happen if Python stopped continually accumulating new features (specifically including the `match` statement):

> Essentially the language would stop evolving. I worry that that would make Python become the next legacy language rather than the language that everyone wants to use.

https://discuss.python.org/t/pep-8012-frequently-asked-quest...


Another example from another language: `class` added to JavaScript. An addition that was made to the language for something that was already possible, that didn't add anything fundamentally new, and that was included just because developers from other languages were more used to that particular syntax.


"Other languages have it" is a disease that's struck many languages in the past decade+, notably Javascript which added partial OOP support (classes but initially no access modifiers), or Java which added functional programming constructs via Streams.

I mean granted, Java needed some tweaks for developer ergonomics, and I'm glad they finally introduced value types for example, but I now find that adding entire paradigms to a language is a bad idea.

In the case of Go, yes, it needed generics, but in practice people don't use generics that often, so thankfully it won't affect most people's codebases that much. But there are people advocating for adding more functional programming paradigms and syntax like a function shorthand, which is really frowned upon by others, because new syntax and paradigms add to the things you need to know and understand when reading Go.

Plus, at the moment (the way the language is designed), FP constructs have no mechanical sympathy and are far slower than their iterative counterparts.
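For illustration, a sketch (not code from the thread) of the kind of generic helper people are asking for, next to the plain loop most Go code uses today; the helper goes through a function value per element, which the compiler won't always inline:

    package main

    // Map is a sketch of the FP-style generic helper: it allocates a result
    // slice and calls f through a function value for every element.
    func Map[T, U any](in []T, f func(T) U) []U {
        out := make([]U, 0, len(in))
        for _, v := range in {
            out = append(out, f(v))
        }
        return out
    }

    func main() {
        nums := []int{1, 2, 3}

        // FP style: an indirect call per element through the closure.
        doubled := Map(nums, func(n int) int { return n * 2 })

        // Idiomatic Go today: a plain loop the compiler optimizes directly.
        tripled := make([]int, len(nums))
        for i, n := range nums {
            tripled[i] = n * 3
        }

        _, _ = doubled, tripled
    }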


Yes about the language, but Google also understands that tooling and the standard library are more important than the language itself. All of that makes internal maintenance at Google much, much better. Another artifact of google3 was the absolute shitshow of GOPATH and abysmal dependency management, because Google didn’t really need it while people outside suffered. When go mod support was added, Go became 10x more productive outside of Google.

Go is almost an anti-language in that sense, reluctantly accepting shiny lang features only after they’ve been proven to address major pain points. It’s almost more an “infrastructure deployment toolkit” than a language. Which strangely makes it extremely pleasurable to work with, at least for network-centric applications.


"That's why go's toolchain isn't as messed up as npm, yarn, grunt, ..."

And let's be honest, the Rust toolchain is pretty messed up too.

Want to cross-compile with Go? Set the GOOS variable and you are done. With Rust you need to curl-sh rustup, switch to nightly, add new targets, add the target to your Cargo config, and cross your fingers that it works this week.
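For reference, cross-compiling a pure-Go project (no cgo) really is just setting two environment variables; the target below is only an example:

    GOOS=linux GOARCH=arm64 go build ./...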


To be fair, with Go it's still painful to work with private repositories. And I'm invoking Cunningham's Law here when I say there's no convenient way to do so.

If you want to use a private repository (let's say it's a private GitHub repository), then you either have to do it the bad way (setting up your own GOPROXY and setting it up securely, which also implies ensuring it's not leaking your source code elsewhere for "analytics purposes"), or the worse way (doing brittle text replacement using weird git config stuff).

Or the annoying way of using a vanity import and hosting that package on your own ___domain over HTTPS with a wildcard certificate. But that would require either (1) only allowing access through WireGuard and hoping whatever reverse proxy you use has a plugin for your DNS provider; or (2) letting your VPS provider terminate TLS (e.g. a Hetzner load balancer), but filtering by IP address in your VPS firewall settings, ensuring your public address (IPv4 /32 or IPv6 /64) is always up to date in the firewall.

Or using `replace` in `go.mod`, but those don't work transitively, so they only work in "root" projects (i.e. they are ignored in dependencies), and I don't think that really counts as a solution.

I would have liked some way to force SSH access for a specific package, instead of HTTPS. For example, `go.mod` supporting a `require-private` directive to use instead of `require` (or whatever similarly convenient `go.mod` directive that implies authenticated SSH access is required).
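Something like this hypothetical `go.mod`; to be clear, `require-private` does not exist in the real toolchain, it's just to illustrate the kind of directive I mean:

    module github.com/_company/project

    // Hypothetical directive: fetch over authenticated SSH instead of HTTPS/GOPROXY.
    require-private github.com/_company/dependency v1.2.3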

Or in other words, say I have a package `github.com/_company/project`, and it depends on `github.com/_company/dependency`. I want to be able to do:

    git clone '[email protected]:_company/project.git' 'project'
    cd 'project'
    go mod tidy  # Should just work.
`go mod tidy` should just work without complaining about `github.com/_company/dependency` not existing (because it's a private repository only accessible through SSH).

(EDIT: Still, I agree with the point that Go's tooling is better than most other things I've tried. My only complaints are this one about it being inconvenient to use private repositories, and also that by default it leaks package names to Google whenever you do `go mod tidy`.)


1) git config --global url.ssh://[email protected]/.insteadOf https://github.com/

2) export GOPRIVATE='github.com/_company/*'


>> or the worse way (doing brittle text replacement using weird git config stuff).


Rust cross-compilation isn't great, but this seems a bit hyperbolic:

> On Rust you need to curl-sh rustup

Yes, but you do that once per computer, probably when you installed the compiler

> switch to nightly,

No

> add new targets,

Fair. But this is also one-time setup.

>add target to your cargo

Not sure what you're talking about here tbh

> and cross fingers it works this week.

Don't use the nightly compiler and you're good


If you are doing embedded work, which is where you often do cross-compiling, you still need nightly.

But that's a whole different can of worms.


Still 100x better than cross-compiling C


With it being so consistent and predictable, I wonder why it hasn’t displaced .NET and Java in the enterprise for back end development.

Maybe because a framework like ASP.NET or Spring that covers like 80+% of the enterprise needs hasn’t quite emerged? Or perhaps we just need to give it a decade or so.

There are still very few Go jobs in my country, most are either .NET, Java or PHP.


Go doesn't really lend itself to code with a lot of abstraction layers. If you try to do that, you will start to run up against the language.

More enterprisey software usually has lots of layers for organizational reasons, and Go doesn't really fit there. So I don't think it will really be a hit in the enterprise.

Yes, there are ORM solutions and DI frameworks and all that, but they always feel like they don't belong in the language.

(Also, Java, .NET and PHP are much older and have much bigger enterprisey ecosystems.)

I have seen Go replacing the PHP stack though, and in some ways the "original" Python stack - but Python now has ML.


> usually have lots of layers for organizational reasons

The ridiculous number of layers in Java or C# is more of a skill and guidance issue than anything else. Older languages don’t always mean over-attempted abstraction (think C, for example).


I have worked in enterprises, and the layers are usually an organizational issue that just manifests as a code issue.

Structure of code often reflects structure of your company.

The code is often a maze of layers because the company is a maze of sub-committees. Java/C# fit more neatly in there.

Although with Go, what can happen is that there is a maze of microservices. Maybe that's not that much better.


Never seen CORBA and COM written in C, I guess.

Enterprise Architects will do their beloved architectures with whatever language is the tool of the day.


Ohhhh boy have I seen that. Like I say, it’s a skill issue, but those people tend to be in Java and C#, not Go and Rust.


Go has the Kubernetes ecosystem for that, and Rust has yet to take off as an application programming language embraced by Enterprise Architects at big corps.


The Kubernetes code base isn’t great, but the notion that the ecosystem is even nearly as complex as the average EA monstrosity implementing basic forms over data as multi-player Microsoft Access is fanciful at best.


> ASP.NET or Spring

(Disclaimer: I don't agree with HTMX rendering HTML chunks on the backend side, and I think that APIs should be HTTP-cacheable and use REST/JSON)

Currently I am trying to get better at using Go with WebAssembly for the frontend. I love using the webview/webview bindings to build local GUIs, but the need for redundant JavaScript code for client data transfers and input data validation annoys me a bit too much.

I am trying to find a paradigm that could benefit from the strength of JSON marshalling, with the idea that you can map routes to structs, for example, and where a unified "Validate() (bool, error)" method as an interface is enough to use the same structs on both the frontend and the backend, for both serialization and validation/sanitization.
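A minimal sketch of the idea, with hypothetical names (this is not an existing library, just the shape I'm aiming for):

    package forms

    import (
        "errors"
        "strings"
    )

    // Validator is the shared contract: the same structs are JSON-marshalled
    // and validated with the same method on the wasm frontend and the backend.
    type Validator interface {
        Validate() (bool, error)
    }

    // SignupForm would be compiled into both binaries; the json tags drive
    // serialization, Validate drives validation/sanitization.
    type SignupForm struct {
        Email    string `json:"email"`
        Password string `json:"password"`
    }

    func (f SignupForm) Validate() (bool, error) {
        if !strings.Contains(f.Email, "@") {
            return false, errors.New("invalid email")
        }
        if len(f.Password) < 12 {
            return false, errors.New("password too short")
        }
        return true, nil
    }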

Having said that, I think what's missing the most in Go right now is a good UI framework for the web, but it's hard to find a paradigm that fits nicely into the language while also being able to handle dynamic data/refreshes/render loops without getting too bloated too quickly.


> and I think that APIs should be HTTP-cacheable

Rendering HTML on the server does not make it uncacheable.

The vast majority of people do not need GraphQL, or to shift their compute to the client with a junk React app with ad-hoc JSON endpoints everywhere.


I think server side rendering can often be pretty comfortable to use!

That said, I’ve also worked on JSF and PrimeFaces projects, and the ones I’ve seen have been more burdensome from a maintenance perspective and also buggier than most SPAs that I’ve worked with.

I’ve also seen large monoliths where the tight coupling slows updates down a whole bunch, as opposed to the presentation being in a SPA that’s separate from an API that can be more limited in its scope (nothing wrong with using ASP.NET and Spring for just an API).

Heck, I’ve migrated a SPA from the now-defunct AngularJS to Vue, and it felt better than having a legacy project that’s stuck on an old version of Spring and PrimeFaces, because there are so many moving parts that when you try to update anything, everything breaks.

Plus, turning a SPA into a PWA is a fairly pleasant experience. I don’t think most folks need GraphQL though, especially when we can barely do RESTful APIs correctly and not even all places use OpenAPI specs and tooling, making client code impossible to generate (even SOAP did a better job in that particular regard, with how widespread WSDL was).


Go is displacing .NET where I work for backend development. We still use .NET for Windows GUIs but any sort of server code is mostly Go going forward.


In my enterprise neck of the woods Go is steadily taking over backend development.


Thankfully, it is enough that it is a must in the DevOps space.


It's not yet an enterprise language; alongside the languages/frameworks, there's a huge ecosystem of QA, monitoring, security, etc. solutions in the Java/.NET space that just isn't there for Go.

I mean, there's a slow shift; I keep hearing of "rewriting a Java codebase in Go", but it'll be a slow process. And especially the bigger projects, the ones that represent 30 years of Java development, will and should be reluctant to start rewriting things.


Clutter creep happens in any language, as I see it. To this day, only a small subset of JS syntax is used; I cannot recall anyone besides me ever using method chaining like .map, .filter, etc.

There is a reason why almost every good language is somewhat akin to C to this day, and maybe people started over to get rid of the noise.


> I cannot recall anyone besides me ever using method chaining like .map .filter, etc.

Interesting. That's all anyone on my team ever used once it became available.

But it highlights the point with Go: working with Go for just a little while means I can easily read and work with nearly any Go project after a very short ramp-up.


Haha interesting. I actually like .map/.filter/.find/etc. in JS and comprehensions in Python. I find they communicate the intent of the programmer really clearly as compared to a for-loop.


> The reluctancy to introduce new syntax too quickly (looking at you, TC39 and Babel) makes go an almost maintenance free language.

Could you provide some examples of this? Judging by the pipeline operator proposal [0], moving fast and breaking things isn't always a priority.

It goes without saying that Babel is an external collection of modules that don't fall under the TC39 umbrella, so they're able to iterate at a much greater cadence than the official specification (and you obviously need to explicitly opt into using it).

[0]: https://github.com/tc39/proposal-pipeline-operator/commit/da... (first commit; Nov 9, 2015, which is 8 years, 8 months, 24 days ago)


Well, if you compare against the absolute bottom of the barrel, it's not too hard to look good.


> Unified codestyle

The moment I got my first compile error due to an unused variable and an unused import was the moment I realized Go is not a serious programming language.


The moment the compiler's unused-variable error shows you that you misspelled a variable name by a single character, you'll realize Go is a serious programming language.
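A contrived sketch of what that looks like (hypothetical names; this intentionally fails to compile, which is the point):

    package server

    import (
        "fmt"
        "net/http"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        userID := r.URL.Query().Get("id")
        // One-character typo below: "userId" is reported as undefined,
        // and "userID" as declared and not used - pointing straight at
        // the mistake instead of silently doing the wrong thing.
        fmt.Fprintf(w, "user: %s", userId)
    }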


This has saved me minutes of confusion so many times.


I wonder who hurt them, that is, how much did unused variables/imports hurt at Google for them to make those a compiler error?

As far as I know, I can imagine C/C++ caused some issues like that, because it's hard to figure out whether an import is used, but an unused import does have a cost.


https://research.swtch.com/deps#watch_your_dependencies

> Creeping dependencies can also affect the size of your project. During the development of Google’s Sawzall—a JIT’ed logs processing language—the authors discovered at various times that the main interpreter binary contained not just Sawzall’s JIT but also (unused) PostScript, Python, and JavaScript interpreters. Each time, the culprit turned out to be unused dependencies declared by some library Sawzall did depend on, combined with the fact that Google’s build system eliminated any manual effort needed to start using a new dependency. This kind of error is the reason that the Go language makes importing an unused package a compile-time error.


I think the fundamental reason behind this behavior is just Go's rejection of the concept of compiler warnings. Unused variables must either be A-OK or trigger an error, and they chose the second option. Some other code smells are considered A-OK by the compiler and need to be caught by linters.


The problem with that rejection is that every serious project written in Go has reinvented compiler warnings outside the compiler, with linters.


Why is it a problem?

`go vet` is part of the Go toolchain, so the designers very much understand and acknowledge that code can have issues that are not outright errors.

The distinction they made is very simple: an error is something that is always wrong, and a vet warning is something that is possibly wrong but not always.

They made a judgement call to split the responsibility: the compiler only reports errors; other tools, including `go vet`, can tell you about other issues.

For example: passing a large struct by value to a function is potentially a performance problem but it's also correct code.
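A concrete illustration of that split, as a sketch: the line below compiles without complaint, but `go vet` (via its printf check) flags the `%d` verb being given a string:

    package main

    import "fmt"

    func main() {
        // Compiles fine; `go vet` reports a Printf format/argument mismatch.
        fmt.Printf("user %d logged in\n", "alice")
    }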

If you have ever tried to compile C++ code with different compilers, you will know why this is a wise decision.

The set of warnings is vast and not standardized, so you take C++ source from a project: it compiles for them but doesn't compile for you, because you use a different compiler or have enabled a different set of warnings. At that point you either try to get the other project to "fix" something that isn't an issue for them, or you do stupid, pointless, time-consuming work adjusting your build system.

The same would happen in Go, and it would be a reusability killer. You use some library in your program, but it doesn't compile because you decided to use a stricter set of flags.

The lack of warning switches also simplifies the toolchain. In Go it's just `go build` and it works.

In C++ you have to write some kind of makefile because everyone executes `cc` with a different set of flags.


Is this a problem? Linters are also used with lots of languages that do have compiler warnings. Go just removes the no man’s land between honest-to-goodness compiler errors and linter warnings about code smells. I see no issue with having the latter handled entirely by tools designed for the job.


Why not? Pretty sure it's used in a lot of places, so it does seem like quite a serious language.



