
+1 for Jonathan Livingston Seagull - Fantastic book, and a very very easy read to boot!


That is certainly part of the inspiration and legacy behind serve2d.

The advantage of serve2 is that it is 100% Go and can be integrated and used without any extra dependencies or multi-step setup :)


So here's what hopefully won't be considered a trolling question: I have seen a lot of "100% Go" projects over the past several years and that's usually presented as a big feature. Some pretty trivial things have been redone as brand new in Go, and then suddenly gain lots of attention. What is so magical about a project written in Go vs C, Python, Ruby, Rust, JS, etc.? As a user of the software I won't care what it's written in, if it's done well. If it's done poorly, I am much more likely to look for better alternatives than to fix it (if only I had about 240 hours in a day...), so what's the advantage?

To me, an advantage in usability is having a PPA with properly built .deb packages. If I have to use a language-specific package manager that I don't already use regularly, you've likely lost me, unless I really need this functionality. If it doesn't come with a proper daemon mode (correct forking, PID file support, proper file or syslog logging), sample config file, man page, or an init file, that's even worse. I am much less likely to use this in any type of "production" environment if I have to maintain those pieces myself. Running things in a screen session is so "I'm running a Minecraft server".

That is not to criticize your work. You've done a great job! serve2d looks very interesting and I might actually have to give it a try sometime.


I think the "100% Go" stuff is appealing in that you just have one file that works across OSes, with minimal screwing around.

For things that become part of the OS, yes, I'd rather they come via some install approach that includes the necessary integration. But for anything else, I think a lot of our packaging approaches are dedicated to saving disk space and RAM, which is something that matters way less to me now than it did 15-20 years ago when CPAN and APT were designed. In 2000, disk prices were circa $10/GB [1]; now we're looking at $0.50/GB of zippy SSD [2] or $0.03/GB of spinning rust [3]. RAM is similarly about 2 orders of magnitude cheaper. [4] Given that, it makes a lot more sense to burn space to minimize the chance of a library version conflict or other packaging issue.

Another thing that has changed greatly is the pace of updates. 15-20 years ago, weekly releases sounded impossible to most. Now it's common, and some places are releasing hourly or faster. [5] Thanks to things like GitHub, the whole notion of a release is getting hazy: I see plenty of things where you just install from the latest; every merge to master is in effect a new release.

Given that, I think both Go and Docker are pioneering approaches that are much more in sync with the current computing environment. I'm excited to see where they get to.

[1] http://www.mkomo.com/cost-per-gigabyte-update

[2] http://techreport.com/review/27824/crucial-bx100-and-mx200-s...

[3] http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&Is...

[4] http://www.jcmit.com/mem2015.htm

[5] www.slideshare.net/InfoQ/managing-experimentation-in-a-continuously-deployed-environment slide 27


The number 1 selling point of something written in Go is that it's much easier to package. The result of a compilation is a standalone binary that can be copied anywhere, as long as the architecture matches what was targeted at compile time. This means:

- no more having to deal with dependencies at packaging time, which makes packagers' jobs simpler because all they have to care about is the one and only standard way to retrieve dependencies and build the binary. (Much like the standard way of doing things in C would be ./configure && make && make install, with the added bonus that the dependencies are also taken into account.) This also means that there's a higher chance that the software will be packaged in the distribution of your choice, because the bar is lower

- no more having to deal with dependencies at runtime, because each binary has everything it needs inside of itself. In practice this means "scp as a deploying method". It's an even lower common denominator than packages.
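
To make that concrete, here's a minimal sketch of that workflow (the file name, user and host below are made up):

    // hello.go - a pure-Go program compiles to one standalone binary.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        host, _ := os.Hostname()
        fmt.Printf("hello from %s\n", host)
    }

    // Build and "deploy" (CGO_ENABLED=0 keeps cgo, and thus libc, out of
    // the binary, so the result is fully static):
    //
    //   CGO_ENABLED=0 go build -o hello hello.go
    //   scp hello admin@example.host:/usr/local/bin/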

> If it doesn't come with a proper daemon mode (correct forking, PID file support, proper file or syslog logging), sample config file, man page, or an init file, that's even worse.

This is orthogonal to the choice of programming language, though. On top of that, I believe the application shouldn't deal with forking, it's the job of your supervision system to deal with daemons. All an application has to do is log whatever happens on STDERR and let the system handle that.


How exactly do you not have to worry about dependencies? Does the Go standard library include every routine you could possibly need, and is it always 100% correct and bug-free? If you build your static binary today and tomorrow there is a vulnerability in your libssl dependency of choice, don't you now have to recompile and redistribute a new binary? Seems like a terrible and insecure way to do things. Instead of a distro developer worrying about security updates, you have signed up to do that yourself.

As for logging, there are loads of logging libraries that support both stdout and file logging. My policy is to support both for my project and it has been almost no burden so far (in Python and in C). Not everything is containers, and having a feature like logging does not mean it cannot be used in a container.


> If you build your static binary today and tomorrow there is a vulnerability in your libssl dependency of choice, don't you now have to recompile and redistribute a new binary

Technically there's only one SSL library you should use: the standard one. This doesn't change your overall point that when a part of the program must be upgraded, the whole binary must be upgraded and re-deployed as well, which I totally agree with. If your software is a server that you host yourself and you have full control over the deployment chain, as is the mindset behind Go, then re-deploying a dependency or re-deploying a binary is more or less the same.
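
(By "the standard one" I mean crypto/tls, which net/http uses under the hood. A minimal sketch of an HTTPS server with nothing outside the standard library - the cert/key paths are placeholders for your own pair:)

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello over TLS")
        })
        // cert.pem and key.pem are placeholders; bring your own pair.
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
    }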

Regarding logging, I'm really partial to the approach advertised by 12 factors (http://12factor.net/logs): let your software handle the business, let the supervisor handle the software's lifecycle, and handle the logfiles outside of the software, because there are factors specific to the hosting machine that in my opinion shouldn't be the concern of the software.


Not sure what you mean by standard ssl library. There's OpenSSL, GnuTLS, LibreSSL, and a number of other implementations. If you mean the one that comes with the OS, then wouldn't that be dynamic linking?

Re-deploying the binary, and getting the updated binary are two different things. When OpenSSL has a vulnerability, they announce it after the fix is out, distro maintainers release updated packages, I run `apt-get update && apt-get upgrade` or equivalent. It is on OpenSSL and the distro maintainers to release the update, and on me to apply the update.

When we are talking about static linking, it's now suddenly on the software developer to release a new binary, or on me to rebuild the binary I have from source. Now I have to keep track of (a) which dependencies each project uses, and (b) which vulnerabilities come out. Not being familiar with Go, does it have such a dependency-tracking framework where I can update all packages whose dependencies have been updated? Of course, once I know that I need to perform an update, it doesn't matter if I run `apt-get upgrade` or `go build foo`.

As for logging, I advocate doing both. I really should make a separate blog post about it, but here's what I expect a well-behaved daemon to do:

- Always start in the foreground and log to stdout (otherwise it seems like it exited without any output)

- Use the -d and --daemon flags to go into the background

- Use the -p and --pid options for specifying the PID file

- Use the -l and --log options for specifying the log file ___location (if not specified or is - use stdout)

- If it uses a config file, use -c or --config for the ___location of the configuration file. Default to the standard OS ___location.

This way all possible modes are supported (running under a supervisor process, in a container, as a stand-alone daemon, or in the foreground while in development/testing/experimentation), and it is very easy, even in C, to write software that meets these requirements.
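
For the Go case, a rough sketch of the flag handling (the daemon name and default config path are made up; actual backgrounding is the one awkward bit, since the Go runtime doesn't cope with a plain fork(), so it's usually handled by re-exec'ing or left to the init system):

    package main

    import (
        "flag"
        "fmt"
        "io/ioutil"
        "log"
        "os"
    )

    func main() {
        var (
            daemon  bool
            pidFile string
            logFile string
            cfgFile string
        )
        // Register short and long forms; the flag package accepts both
        // "-" and "--" in front of either.
        flag.BoolVar(&daemon, "d", false, "run in the background")
        flag.BoolVar(&daemon, "daemon", false, "run in the background")
        flag.StringVar(&pidFile, "p", "", "PID file ___location")
        flag.StringVar(&pidFile, "pid", "", "PID file ___location")
        flag.StringVar(&logFile, "l", "-", "log file ___location (- for stdout)")
        flag.StringVar(&logFile, "log", "-", "log file ___location (- for stdout)")
        flag.StringVar(&cfgFile, "c", "/etc/exampled.conf", "config file ___location")
        flag.StringVar(&cfgFile, "config", "/etc/exampled.conf", "config file ___location")
        flag.Parse()

        // Default: stay in the foreground and log to stdout.
        if logFile == "-" {
            log.SetOutput(os.Stdout)
        } else {
            f, err := os.OpenFile(logFile, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
            if err != nil {
                log.Fatalf("open log file: %v", err)
            }
            defer f.Close()
            log.SetOutput(f)
        }

        if pidFile != "" {
            pid := []byte(fmt.Sprintf("%d\n", os.Getpid()))
            if err := ioutil.WriteFile(pidFile, pid, 0644); err != nil {
                log.Fatalf("write PID file: %v", err)
            }
        }

        if daemon {
            // Backgrounding would happen here; in Go this is usually done
            // by re-exec'ing the process or delegating to the supervisor.
            log.Println("daemon mode requested")
        }

        log.Println("starting with config", cfgFile)
        // ... the actual service goes here ...
    }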


A bit of dynamic linking reading: http://harmful.cat-v.org/software/dynamic-linking/ http://harmful.cat-v.org/software/dynamic-linking/versioned-...

Dynamic linking is not synonymous with security or ease of use. It's known to significantly reduce performance, both at load time and at runtime whenever an external symbol is used, and the memory savings aren't too great. One has to remember that static linking also means that unused symbols are left out, as the linker knows what is needed.

Ability to update components of an application is one of the ideas behind dynamic linking, but in practice it doesn't work that well, and often requires that the application is updated to link against a newer version, which of course can only happen when the distro updates. This also includes LTS distros and backports, which you either have to wait for or kill support for.

There's also a difference in how Go handles linking vs. things like C and C++, due to Go actually knowing about multiple files and who uses what. This is quite a bit different from the copy-paste that C preprocessors generate. We just ported a major product from C++ to Go, which gave us a significant performance boost with considerably less code and complexity (this is not a "Go is better than C++" point; Go just provided a lot of things we needed in the std library that were hard to get in C++). The "necessities" were dynamically linked (libstdc++, libgcc_s, pthread, libc, ...), but our own libs were statically linked in. The binary was 51MB. The equivalent, completely statically linked Go binary is 8MB. It also does a clean build in <500ms, rather than the 3-5 minutes it took for the C++ project.

... But do remember that Go 1.5 brings dynamic linking for those who want it. While the Go creators don't seem super fond of dynamic linking, they are providing it.

Regarding updating, "go get -u packagename" will update all dependencies, assuming the dependencies are go-gettable. Vendoring changes the picture a bit, in the sense that applications will bundle their own versions of things, but that doesn't really change the go get -u command. Follow up with a go build, and your application is up to date.


Just to clarify, I am not saying anything bad against Go. It seems like a strict improvement over C++ in a lot of ways. I am simply arguing that in the world where distros are made up of third party software and are maintained by a small team of distro developers/maintainers, dynamic linking is better than static linking. It's not synonymous with security, nor is it the panacea of performance or memory saving (both of which I personally care less about than flexibility). It is simply more convenient from the point of view of a distro maintainer, and because of that the end user.


I understand, I just don't see where it is an improvement.

My rant is mainly triggered by dynamic linking being the standard without many people questioning the usability. It rarely works as intended, especially with versioned symbols. If I depend on openssl, and an update comes in, then one of 3 things can have happened: 1. they updated the old symbols; 2. they implemented new symbols, but left the old ones behind; 3. they implemented new symbols, killing the old ones.

1. means that the behaviour of the library under my application changes, which can lead to unexpected bugs. 2. means that my application is not experiencing the fix at all. 3. means that my application won't start anymore, due to dyld errors. 3 is what happens when you update something to a new version in the normal manner.

This "multi-version" mess also makes the libs more complicated than they're supposed to be. My ubuntu libc, for example, includes symbols from 2.2.5, 2.3, 2.3.2, 2.4, 2.5, 2.7, 2.8, 2.11, and 2.15, just to check the very last symbols. It's a mess.

For users, it's mainly a headache when trying to get newer versions of packages that depend on newer libs. This isn't much of an issue if you're, say, a Gentoo or Arch Linux user, but if you're maintaining Debian systems and need a package from a newer repo but can't/don't want to dist-upgrade to testing/experimental, then you're practically screwed short of compiling things yourself.

For distro maintainers, it's a mess, as all packages depending on the lib for that distro release need to be recompiled when releasing a new version of the lib, which is a lot of work.

Dynamic linking and versioned symbols are also the very reason that sites with binary releases have a binary for Windows, a binary for OS X, a binary for Ubuntu 14.04, a binary for Red Hat 6.3, a binary for Arch Linux, a binary for..., further increasing the inconvenience for the user.

The only time you benefit from dynamic linking is in the very rare scenario 1 of updated libraries, where everything is done exactly right when modifying old symbols so nothing breaks, which is a bit unlikely unless the bug fix was very simple. It also has to be serious enough that the library maintainers see the need to backport the bugfix to the old library versions, rather than release a new one. Otherwise, it's only dragging things down in performance, resource consumption, and maintenance overhead.


> This also means that there's a higher chance that the software will be packaged in the distribution of your choice, because the bar is lower

Static linking and bundling of dependencies is a no-no in most distributions. If anything, the Go model is a headache for package maintainers to deal with.


> no more having to deal with dependencies at runtime

So, it is the same as linking the libraries statically? C and C++ have done that since, like, forever.


Have you ever actually tried producing a statically linked C/C++ binary? I've been programming in C/C++ for 10+ years. Static linking is a huge pain. My latest efforts have led me to create holy build boxes inside carefully controlled Docker-based environments just to be able to produce binaries that work on every Linux. With Go you can just run a single command to cross-compile binaries that work everywhere. Minimal setup required, no expert knowledge required.
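
For example, with a Go 1.5-era (or newer) toolchain it's literally just environment variables; the output names here are made up, and CGO_ENABLED=0 assumes the code is pure Go:

    CGO_ENABLED=0 GOOS=linux   GOARCH=amd64 go build -o example-linux-amd64
    CGO_ENABLED=0 GOOS=darwin  GOARCH=amd64 go build -o example-darwin-amd64
    CGO_ENABLED=0 GOOS=windows GOARCH=amd64 go build -o example-windows-amd64.exe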


> Have you ever actually tried producing a statically linked C/C++ binary?

Many times actually, I prefer to deploy just one file whenever I can.

> just to be able to produce binaries that work on every Linux

That's a Linux design/decision thing. Linux binary compatibility is ... well ... challenging. On Windows it's not hard at all (not sure how it is on Mac OS, since I have worked mostly on iOS in Apple's world).


That sounds like pretty strong agreement, unless you are just targeting Windows?


Very true; I had Python and Ruby in mind. What Go still does differently is that static linking is mandatory.


Hey, if you said "written in C", I would be similarly excited about its ease of deployment.


Go produces a single static binary. Consequently, deploying Go apps is as simple as copying a single file around, and it just works.


That's a very weird way to look at it. Static linking was there before everything else. gcc/ld and, well, any other C/C++ toolchain can do that as well. There is a reason this isn't usually done. It's like you are trying to spin a bad thing into something good.


The reason this isn't usually done is that executable size was significant relative to storage capacity up until the early 2000s or so, and people tried to economize by deduping common parts of their executables via shared libraries / DLLs. This worked well enough to catch on, but came with an extremely high cost in added complexity, and over the years a whole layer of additional infrastructure was created in order to manage it.

The industry progressed, storage capacity grew dramatically, and executable sizes stopped mattering, but the use of shared libraries / DLLs continued out of inertia. As time passed, people started asking - why are we doing all this? And some of them invented a reason, which was the idea that one could swap out pieces of existing executables after installation, and thereby fix security problems in an application without needing to involve the application's developer in the process. This works about as well as you'd expect if you had spent years trying to fit all the rough edges of various third-party libraries together with varying degrees of success, but the idea caught on as a popular post-hoc justification for the huge layer of complexity we're all continuing to maintain long after its original justification became obsolete.

As is no doubt obvious from my tone, I'm not buying it and am very happy to see signs of a pendulum-swing back toward static linking and monolithic executables.


It's hard to be sure why exactly shared libraries and dynamic linking appeared. Your explanation about reducing file size for a smaller HDD and RAM footprint is probably one of the reasons, but I don't believe it's the only one - I don't remember many shared libraries from MS-DOS days (where, with 2MB of RAM and 40MB of HDD, storage was really scarce). In fact, I don't remember any shared libraries at all! To run Doom you just borrowed the floppies from a friend and it would 100% work. The same for Warcraft 1.

I believe it's more similar to database normalization from the RDBMS world than to anything else. And the most important objective of database normalization is considered to be "freeing the database of modification anomalies".

My own experience with shared libraries is pretty positive. I have fixed OpenSSL vulnerabilities many times by just updating the OpenSSL library and restarting all services. Compared to my experience with Docker, where after waiting for a few weeks I had to change my base images (as nobody was updating them), updating just a single shared library and having the vulnerability fixed is way easier!

This, of course, is true if those who maintain the software you use do care about backwards compatibility (which tends to be true for the "boring" stuff and false for the stuff considered "cool" - looking at you nodejs library developers who break my build at least once a month).


There are other important downsides to static linking, namely critical security updates to shared components. It's better to have to update your TLS library once per system than to update every app that came with it. And that's the best-case scenario, where the developer notices and the app is actually repackaged.


Yes, that's the argument I was referring to, and as I said in the comment you're replying to, I don't believe that the cost is worth the benefit.


Well, you wouldn't want every executable to bundle some unknown version of OpenSSL, and you'd get into all kinds of problems if you had several different versions of glibc around.

But for most libraries it may really be overkill.


I see OpenSSL and libc as being effectively the "system version" as you would know it on Mac OS or Windows. It's OK to link dynamically against the operating system platform, but generally a program knows what version of the OS it is built for and expected to be compatible with. Users know that upgrading the OS is generally worth doing but may break things.

What we lack in the unix world is a coherent division between what ought to be a very small, stable, well-understood set of fundamental system libraries suitable for dynamic linking and the vast array of utilities a developer might choose in order to get an app built without having to reinvent everything from scratch.

Upgrading libraries out from under a built, tested executable is not something we should be doing lightly, because there is no possible way to know in advance whether the apps depending on the library have succeeded in programming to the interface rather than the implementation.


Just because you don't like the single binary that works everywhere, doesn't mean that others find it a problem. One approach doesn't fit every possible situation.


Even if you think the wasted RAM and the security issues aren't a problem, why is that an argument especially for Go, when almost every other language can be built into a single static file as well?


Because it is the normal, and only, way for Go. Static linking isn't as easy for other language platforms, as some issues crop up (a Google/SO search shows many questions). Often it is as simple as not having the static libraries available, or having difficulty linking with them because they still want to dynamically load other libraries.

I.e. in other languages it may not work, it is less tested, and it is not normally done.


The original authors of dynamic linking concluded that the cost was way higher than the benefits, both in memory usage and general performance, but the client demanded it. Dynamic linking is the number one binary compatibility issue on Linux.

Go 1.5 has mechanisms for dynamic linking, though.
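
If memory serves, it looks roughly like this on linux/amd64 (the exact invocation may well change as the feature matures, and the binary name is made up):

    # Build the standard library as shared libraries once...
    go install -buildmode=shared std
    # ...then link programs against them instead of statically.
    go build -linkshared -o myapp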


Believe it or not, you can freeze binaries for python apps too, if you like: https://wiki.python.org/moin/Freeze


I've done it for Windows, Linux, and Mac before. Note that these solutions freeze the Python side of things but do not freeze the platform side. For example, they do not include the system libraries. Consequently, running the frozen Python app on a system that is a different distro, an older or newer OS version, or has different system packages installed often leads to the frozen Python app not being able to start.

Slide 19 of this if you are interested: http://www.bitpim.org/papers/baypiggies/siframes.html


@boomzilla: Go ahead and try it IRL. It's a huge PITA and there are plenty of gotchas.


As I alluded to below, the power of this feature should probably not be underestimated.


A staid, solid, conservative outlook - a good perspective for others to realize is out there. I would say that concerns like daemon mode and logging are a lot less in vogue these days: a program ought to concern itself with running and outputting to stdout, and if you have needs past those, it's expected you have tooling you can deploy that makes that happen.

Daemonization is at least a fairly standard feature, but with logging there are so many people with such varied concerns that getting fancy, trying to meet people's many needs, can lead to a lot of program bloat very quickly. Instead of going at these on a case-by-case basis, and now that we are more container-centric, it makes sense to run in the foreground and put your output on stdout, and let the rest of the system support that utterly uncomplex pattern.
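
In Go terms that whole philosophy fits in a handful of lines - a sketch:

    package main

    import (
        "log"
        "os"
        "time"
    )

    func main() {
        // Foreground process, plain stream on stdout; the supervisor
        // (systemd, runit, a container runtime, ...) captures and routes it.
        logger := log.New(os.Stdout, "", log.LstdFlags)
        for {
            logger.Println("doing work")
            time.Sleep(10 * time.Second)
        }
    }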


I think we're learning pretty quickly that 100% Go (or Rust, or Python, or Perl, or OCaml, or ...) is a good idea for security. Especially if you're dispatching between SSH and SSL services.

Go has an advantage over scripting languages in speed, over C/C++ in memory management, over strict FP languages in popularity, and over Rust in being stable and known for longer.


”The advantage of serve2 is that it is 100% Go and can be integrated and used without any extra dependencies or multi-step setup :)”

Great! This seems to be a very good selling point for Go!



That looks like a start but implements only a fraction of the usefulness.


I think it could be at least something to talk about if you have nothing else to discuss with someone.


Did you get the idea from a 70's copy of Cosmopolitan?


Are you looking for feedback? If so, here is mine:

---

Forgive me, but this doesn't look easy.

Tons of JSON in bash.. not fun.

And no pricing link.

And no practical useful examples in the article.


Thanks for the feedback!

We support JSON in the CLI so users can access the response data with tools like jq (http://stedolan.github.io/jq/), but leaving off the json parameter will produce human-readable output instead.

The pricing is regrettably not up yet, but we'll definitely work on some other examples.

Cheers! Tim


Are you saying PG/YC is in the business of being charitable? I think that behavior only emerges when the average payout is larger than what they put in..


How do you explain YC investing in Non-profits like Watsi? There is more to YC than the payout. There is definitely a core goal to help good people build better products. Money just usually follows.


Easily explained: TAX DEDUCTIBLE!

Sorry, but I am not buying into the theory that "money just follows". That's a sweet pipe dream and all, but not a reality I'm familiar with.


I don't follow. A donation is a net loss even after the tax deduction. There is no profit from funding a non-profit, even if you can take a tax write off.

If you want to be cynical, good PR would be a much better explanation than something involving taxes.


Please forgive me if the title isn't 100% accurate. After thinking about it, this possibility of combining both native and web view rendering seems like it could be an interesting approach for a wide variety of potential use-cases.

Are there any nice tools out there for creating apps that render to both native + web?


Create an HTML app and pack it in some kind of shell (Chromium Embedded, Atom Shell, node-webkit)?


GTK3 has an HTML5 backend: https://developer.gnome.org/gtk3/stable/gtk-broadway.html

But I think you still need all the libs and stuff.


I had never noticed this before; then I read your comment. Now I see it all over the place... oh man.


For Spotify, there is also Artem Gordinsky's fabulous Spotifree:

https://github.com/ArtemGordinsky/SpotiFree

http://spotifree.gordinskiy.com/

DISCLAIMER: This has nothing to do with command-line anything, it's just a useful project I discovered and wanted to share.


For reference, the FlashCache project and source code is publicly available on github: https://github.com/facebook/flashcache/

