I discovered Erlang in the late 2000s via its kinda weird and iconic video "Erlang the Movie" https://www.youtube.com/watch?v=xrIjfIjssLE which prompted me to dig further. I was an average developer at the time. Erlang, and Joe Armstrong's books and videos, made me understand things that other web-oriented languages (Ruby, PHP...) never really dug into back then.
Because the Erlang ecosystem is more than distribution. Distribution is a consequence of its design around concurrency, message passing, and the let-it-crash/auto-healing philosophy. Erlang, and after that Elixir, made me think differently about code and made me a better programmer.
Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.
I expected a bit more content in the article, like which Kubernetes components correspond to which OTP functions or general patterns, and how easy it can be to implement your own mini-Kubernetes in Erlang, not just "hey, they both solve the problems of a distributed system by using similar patterns".
I wish I could downvote, but this feature doesn't seem to be available in my UI; I guess only certain users can downvote.
The main difference sadly breaks most of the analogy.
Because a pod is far from the granularity of a process. It is closer to the granularity of an application, the biggest abstraction that OTP gives you.
What that means is that k8s does not have the same properties as Erlang. In Erlang, isolation is your go-to tool: if you need something, you spawn a process. Having a lot of really granular processes makes scheduling on cores easy, makes your design safe to crash because the blast radius is small, and makes it really cheap to have message passing at the core of the VM.
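To make the "lots of granular processes" point concrete, here is a minimal Elixir sketch; the count of 100_000 is arbitrary, and on the BEAM spawning that many processes is routine:

```elixir
parent = self()

# Spawn 100_000 isolated processes, each with its own tiny heap;
# a crash in any one of them cannot corrupt the others.
pids =
  for i <- 1..100_000 do
    spawn(fn ->
      receive do
        :ping -> send(parent, {:pong, i})
      end
    end)
  end

# Message passing is cheap: ping them all and collect every reply.
Enum.each(pids, &send(&1, :ping))

for _ <- 1..100_000 do
  receive do
    {:pong, _} -> :ok
  end
end
```
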
In k8s, any networking is hard, hence the current trend of service meshes. As the unit of computation is bigger, supervision gets harder. Building a supervision tree is more expensive. And crashing and restarting get more expensive too.
K8s has the advantage of being language agnostic, ofc. But do not forget the paradigm shift that having really cheap processes gives you in an integrated environment like the BEAM.
And ofc, I have not even talked about the dynamic tracing that the BEAM gives you.
I recently went halfway into a project using an Erlang back-end but didn't get to a production stage, mainly due to political reasons in my organization. Mnesia (Erlang's distributed database) is remarkable and, in the Erlang way, it comes completely packaged with the system. My only gripe is that the documentation overall is a bit hard once you start digging in. The OTP book mentioned is good but may be a bit dated (though we can still figure out what is amiss). I am not sure whether drivers for accessing external databases such as Mongo, PostgreSQL etc. are actively maintained. (Edit: when I check now, it looks like the drivers were recently updated!)
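For a flavor of how "completely packaged" Mnesia is, here is a minimal sketch calling the `:mnesia` Erlang module from Elixir; the `:person` table name and record shape are made up for illustration:

```elixir
# Mnesia ships with OTP: no external database to install.
# Create a schema on this node, start Mnesia, and define a table.
:mnesia.create_schema([node()])
:ok = :mnesia.start()
{:atomic, :ok} = :mnesia.create_table(:person, attributes: [:id, :name])

# Reads and writes run inside (potentially distributed) transactions.
{:atomic, :ok} =
  :mnesia.transaction(fn -> :mnesia.write({:person, 1, "Joe"}) end)

{:atomic, [{:person, 1, "Joe"}]} =
  :mnesia.transaction(fn -> :mnesia.read({:person, 1}) end)
```
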
A nice application-oriented book is the 'Handbook of Neuroevolution Through Erlang' by Gene I. Sher, from Springer. It has the code and a model for distributed agents (for neural networks in biology).
>>Elixir depends on the decades of development behind the Erlang VM and in many ways is just syntactic sugar on top of Erlang/OTP
I think calling Elixir "just syntactic sugar" is a gross and unfair oversimplification. While friendly syntax is one of its welcoming qualities, Elixir is its own language. It doesn't "transpile" into Erlang. It compiles down to bytecode.
Elixir creates an AST which is transformed into Erlang Abstract Format (http://erlang.org/doc/apps/erts/absform.html) which is then compiled into Erlang bytecode by Erlang compiler.
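You can inspect the first stage of that pipeline yourself with `quote`, which returns the Elixir AST for an expression:

```elixir
# `quote` exposes the Elixir AST: a tuple of
# {operator_or_call, metadata, argument_list}.
ast = quote do: 1 + 2
{:+, _meta, [1, 2]} = ast

# Macro.to_string/1 turns an AST back into source text.
"1 + 2" = Macro.to_string(ast)
```

It is this AST that the compiler then lowers to Erlang Abstract Format and on to BEAM bytecode.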
I agree with your post because Elixir has brought so much beyond syntax to the BEAM such as tool aesthetics and community building. However, Elixir does not compile to BEAM bytecode directly. The compiler emits Erlang terms first which are then compiled to bytecode.
Eventually I tried to do something with Elixir, and suddenly everything was so good.
The whole experience of creating a new project with `mix`, getting dependencies, building, compiling, and running the REPL was so much better than rebar.
To me, `mix` is the greatest tool. I don't see something like mix in other languages, except Clojure with Leiningen.
The community around Elixir is awesome too. It's very different from the Erlang one. I had a hard time finding libraries and such in Erlang before; I didn't have a central place to look. Elixir brought me hex.pm.
I personally like Erlang's syntax; I think it's short and concise. But I ended up liking Elixir much more than Erlang.
Does Erlang support structural sharing between large data structures? If not, how is it possible to efficiently pass large messages between processes on the same physical machine?
The only thing that is passed "efficiently" between processes are large binary blobs. Everything[0] else is copied. Up to a point, this is efficient, as a copy decouples sharing and this is really good for modern caches, and it is good for the garbage collection.
However, when the data becomes too large, Erlang programmers often choose to bring the computation to the data rather than the other way around. Passing a function is efficient. Keeping data cache-local in a process is efficient.
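A sketch of "bringing the computation to the data" with plain processes; the `DataHolder` module and message shapes are made up for illustration:

```elixir
# A process owns a large piece of data; instead of sending the data
# out, callers send in a function to run against it.
defmodule DataHolder do
  def start(data) do
    spawn(fn -> loop(data) end)
  end

  defp loop(data) do
    receive do
      {:compute, fun, caller} ->
        send(caller, {:result, fun.(data)})
        loop(data)
    end
  end
end

holder = DataHolder.start(Enum.to_list(1..1_000_000))

# Ship a (tiny) closure instead of the (huge) list:
send(holder, {:compute, &Enum.sum/1, self()})

receive do
  {:result, sum} -> IO.puts("sum = #{sum}")  # prints sum = 500000500000
end
```
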
Passing on a local machine is optimized quite well. The API is the same as passing to a remote machine, but serialization is omitted when passing locally. A fair amount of copying still happens, but (I assume--now we're into territory that I am less sure about) that keeps the implementation robust and simple (i.e. there's no "ownership" to wrangle down in the BEAM; at the highest level of abstraction inside the VM, it's just copies of immutable structures).
For communicating quickly locally, you could use large/refcounted binaries (http://erlang.org/doc/efficiency_guide/binaryhandling.html) to avoid copying on message pass. Those generally tend to be frowned upon, and they incur the usual headaches/perf issues of a ref-counted structure as well.
In short, I'd suggest defaulting to the assumption that Erlang is fast enough at passing messages locally for it not to matter that much in practice. Benchmark the throughput of your system locally, and if the overhead of message passing isn't a major pain point immediately, move on up the stack. If you're doing scientific computing and want to number-crunch huge data sets in parallel via some minimal-coordination memory-map-backed system, this is not the platform for you (or use a NIF).
How is it optimized well when there is still lots of copying going on? Anything in the same process space should just be a matter of passing a pointer or a pointer with very little information wrapping it up.
> Those generally tend to be frowned upon, and incur the usual headaches/perf issues of a ref-counted structure as well.
Why would reference counting be worse than copying?
I think what is frowned upon is serializing values into a refcounted binary instead of passing around native terms. If you pass around the same value, you could probably save some copies that way, but it makes for larger code.
Nobody will complain if it's needed, but chances are, it's not needed. (It depends on what you're doing though).
Plus, refcounted binaries scare people. Until recent versions, refc binaries didn't play nice with GC triggering, so it was easy for a very efficient proxy process to never trigger GC while still keeping references to a lot of garbage. If you're not careful with referencing pieces of binaries you can do something similar too -- maybe you got a huge binary but only need to save a few bytes; it's easy to keep a reference to the huge thing (hint: use binary:copy/1 to get a clean copy of the small piece that you want to keep in your state or ets).
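The pitfall and the hint look like this in practice (the sizes are arbitrary):

```elixir
# A ~10 MB refcounted binary.
big = :binary.copy(<<0>>, 10_000_000)

# Matching out a slice creates a sub-binary: a small header that
# still references the full 10 MB buffer, keeping it alive.
<<slice::binary-size(4), _rest::binary>> = big

# :binary.copy/1 makes a standalone 4-byte copy, so the big binary
# can be garbage collected once nothing else references it.
keep = :binary.copy(slice)
4 = byte_size(keep)
```
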
Because refcounting means a lot of cache misses on the CPU core.
One of the interesting advantages of Erlang is that its processes are small and mostly fit in L2 cache. So you can load a whole process at once, then compute for your whole slice, all in cache.
Why would reference counting increase cache misses?
> One of the interesting advantages of erlang is that its processes are small and mostly fit in L2 cache.
Are you talking about data or are you talking about execution? I wouldn't think execution would need reference counting.
> So you can load a whole process at once, then compute for your whole slice, all in cache.
Whatever data is being loaded still has latency. If it is accessed linearly it can be prefetched. Copying it around won't put it into a CPU's cache any faster and will require using the writeback cache, not to mention all the cycles spent copying.
I'm not sure how data that already exists in memory and potentially in L3 caches, etc. is going to be hurt by reference counting.
> The result may surprise you.
Do you have results that I can look at? If not I think they will surprise both of us.
> How is it optimized well when there is still lots of copying going on? Anything in the same process space should just be a matter of passing a pointer or a pointer with very little information wrapping it up.
Sure, you could pass a pointer around. But then you'd have all the usual headaches about concurrent modification (some of this stuff is mutable when you get down to the implementation/pointer level) and garbage collection (who has it?).
It's optimized well enough that the perf penalty is minor. Copy-on-write semantics, "tail-send" optimizations (I don't know what the technical term for this is, but: data or pointers are moved if a message send is in particular positions in code, without getting the GC involved at all until later) and more are all used where they make sense.
Could it be faster if you built it around a zero-copy runtime? Sure. So could Perl. But then you'd be building your own runtime, with different priorities and tradeoffs than the one that already exists--one whose biggest benefit is that you don't have to build it yourself.
Erlang doesn't promise zero copies locally, but nor does it take the most wasteful possible path; I am not a deep BEAM expert, but am consistently surprised at how well it works. Seldom have I wondered "hmm, the runtime could do something fast/cool with this particular type of code; I wonder if it does..." and, upon checking the available sources/debugger output, been disappointed.
The runtime isn't built around hyper-efficient zero-copy semantics (it has a garbage collector, an M:N scheduler, a bloody instruction counter routine that runs every time you invoke any function at all, for heaven's sake). For all that, it's more than fast enough for anything besides high-performance scientific computing/number crunching locally. If you need to do that, use a language that lets you send pointers between threads exactly how you want. If you still want to use that from Erlang, add some reduction-count markers in your $fast_language code and hook it up to Erlang via a NIF.
The BEAM doesn't waste time/complexity more than necessary on hyper-efficient local optimizations so it can focus on its main strengths: really good concurrency and parallelism, and making a near-bulletproof, super-high-performance set of primitives for distributing your code with minimal changes. Of all the "you call this function as if it were local, but it actually runs remotely in nearly the same exact way!" tools out there, even ones invented recently with the benefit of history to learn from, Erlang/BEAM's implementation comes the closest to fulfilling that promise IMO.
> Why would reference counting be worse than copying?
Refcounting is a valid strategy in some cases, but has drawbacks as well. It doesn't handle cycles well, and requires some stop-the-world events (or lots of locking complexity) for garbage collection in concurrent environments. More details on Google, this thread goes into the differences reasonably well: https://www.quora.com/How-do-reference-counting-and-garbage-...
I believe that BEAM implements copy-on-write, so unless you create a copy with modifications it should minimize copies. Values are immutable, so this eliminates a lot of the need for copying during message passing.
When you pass a message, all the terms will be copied into the memory space owned by the receiving process. Refcounted binaries don't copy the value, just the reference of course (as well as updating the reference count information). This results in a lot of copying, but also makes process memory management very simple: when a process dies, you need to clean up the global state: decrement refcounted binaries, send notifications for linked processes and monitors, and release all the process heap memory. You don't need to inspect the process memory or worry about dangling references to it.
Refcounted binaries are (mostly) effectively copy on write though; if you create a term that's a modified form, a new value will be formed.
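For reference, binaries larger than 64 bytes are stored off-heap and refcounted, so sending one in a message copies a small reference rather than the payload; a sketch:

```elixir
parent = self()

# ~1 MB off-heap binary: sending it copies a reference, not 1 MB.
big = :binary.copy(<<1>>, 1_000_000)

pid =
  spawn(fn ->
    receive do
      bin -> send(parent, {:got, byte_size(bin)})
    end
  end)

send(pid, big)

receive do
  {:got, size} -> IO.puts("received #{size} bytes")  # prints received 1000000 bytes
end
```
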
BEAM does not share structures like this. It would make garbage collection much harder. Immutability and shared nothing makes the GC simple. You couldn't compact a process heap if parts of it were in use by other processes.
Sometimes you want to set up an efficient processing pipeline on a single machine, e.g. when writing a database engine. In that case, you don't want to serialize all communication between processes. Of course, other processes (not related to the efficient pipeline) can live on different machines.
Who said anything about serializing? To coordinate multiple processes via a file-backed interface, you still might have to wait for the disk (unless you're using /dev/shm), but you don't necessarily have to serialize/deserialize data. Most people tend to confuse the two.
You can cast bytes from a memory-mapped file just fine, if you trust the file/other writers, or are willing to sacrifice some safety. There are also a lot of serialization systems that are simple/fast enough as to basically be structs, e.g. flatbuffers: https://google.github.io/flatbuffers/. Even those still don't avoid that pesky copy, though; for that you'll have to interact with raw memory and hope it's in the right layout for your purposes.
> To wrap up, it's evident that much of the work that went into Erlang has inspired the work of Kubernetes.
Even if there are clear similarities between Erlang and Kubernetes, I'm not sure the former inspired the latter. We should ask Kubernetes developers :-)
Kubernetes is based on Borg, which as far as I know wasn't really inspired by Erlang. However, I suspect the problem space lends itself to independent discovery of very similar solutions.
In large Erlang applications, the "send to multiple peers" broadcast tendencies of its message passing can indeed cause the network to become a bottleneck; so can the default chattiness that a lot of Erlang application developers assume for their app behavior.
Fortunately, there are lots of simple-to-apply patterns and solutions when you get to that point, for example the excellent Partisan library: https://github.com/lasp-lang/partisan
Sure. As with any other distributed processing, if you have enough other resources, you can run out of network. Or if something happens to reduce capacity of the network, it's going to constrain the rest of your system.
It depends on what you're doing. At work we have several systems that needed to upgrade to 2x 10G Ethernet (from 2x 1G), but not much pushes that limit at the moment. Our newer hosting has more bandwidth with smaller CPU and RAM, so we expect to see more CPU constraints than network constraints.
If you push your design to the limits of scalability, you're going to hit a wall at some point (IO, memory, or CPU), and as long as your application is not based on CPU-intensive calculations (simulations, calculus, ...), most of the time IO is the bottleneck, network or disk.
However, I am not that experienced in distributed applications and would appreciate reading the opinions from more experienced engineers.
Yes! Each node has a name, and you would look at the functions in the Node module for how to do that. In Elixir you'd say:
iex(3)> defmodule Hello do
...(3)>   def world() do
...(3)>     IO.puts "here we go!"
...(3)>   end
...(3)> end
iex(4)> Node.spawn_link(:erlang.node(), fn -> Hello.world() end)
here we go!
#PID<0.101.0>
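The same API works across nodes. A hypothetical sketch, where the node name :"worker@127.0.0.1" is made up and assumed reachable (both nodes started with --name or --sname and sharing the same cookie):

```elixir
# Connect to the (hypothetical) remote node, then spawn there:
remote = :"worker@127.0.0.1"
true = Node.connect(remote)

# The closure is shipped over the wire and runs on the remote node.
Node.spawn_link(remote, fn ->
  IO.puts("running on #{Node.self()}")
end)
```
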
If you like the O'Reilly format, then I liked Designing for Scalability with Erlang/OTP. I thought it was light on the details of scaling beyond single-node OTP apps, but it covers a lot of the built-in Erlang functionality pretty well.