On a more micro level, I find it very hard to write good documentation. I always forget something that, once pointed out, seems obvious. Or worse, the reader is missing some important context that many other readers are already privy to. Not to mention, some people don't even seek out docs before acting.
I imagine this gets amplified in a large org. The docs are lacking, people might not read them anyway, and you get an explosion of people who don't understand very much but still have a job to do.
It's always crazy to me that you can build crappy websites in the most flexible environment imaginable and make way more than the people doing the genuinely deep and challenging work required for those websites to run in the first place.
> SSE enables microsecond updates, challenging the limitations of polling in HTMX.
How is this true? SSE is just the server sending a message to the client. If server and client are on opposite sides of the world, it will not be a matter of microseconds...
You can have microsecond updates: once the connection is established, you can stream regardless of your latency.
Say your ping is 100 (units are irrelevant here). It will take 100 before you see your first byte, but if the server is sending updates down that connection, you will get data at whatever rate the server can send it. Say the server sends every 10.
Then you will have updates on the client at 100 110 120 130 etc.
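To make that arithmetic concrete, a toy sketch in Python (same numbers as above; units still irrelevant):

    latency, interval = 100, 10  # one-way latency, server send interval
    arrivals = [latency + i * interval for i in range(4)]
    print(arrivals)              # [100, 110, 120, 130]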
That's still 100 irrelevant units later than the server sent the update. This is like saying the first byte of the packet takes 100ms to arrive but the subsequent bytes in the packet are instant!
It's not quite right. You'll never have updates in microseconds even if your ping is, say, 7ms.
At best you can be ~2-4x as fast as long polling on HTTP/1 -- an order of magnitude is a ridiculous statement.
Well, obviously there's a difference between latency and throughput. Of course it's going to be microseconds plus your RTT/2. Sorry, we can't beat physics.
In a way, you can with optimistic updates. That requires having a full front-end stack, though, and probably making the app local-first if you really want to hammer that nail.
There's always the cost of the round trip to verify, which means planning a solid roll-back user experience, but it can be done.
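A minimal sketch of that idea in Python, with send_to_server as a hypothetical stand-in for the verification round trip:

    class OptimisticState:
        """Apply a change locally first; roll back if the server rejects it."""

        def __init__(self, value):
            self.value = value

        def update(self, new_value, send_to_server):
            previous = self.value
            self.value = new_value         # the user sees the change immediately
            try:
                send_to_server(new_value)  # verification still costs a round trip
            except Exception:
                self.value = previous      # the solid roll-back path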
Latency doesn't affect the server's update rate; it affects time to first data. I can have a ping of 500 ms and still get an update from a stock ticker every 5 milliseconds. They will arrive at 500, 505, 510, etc.
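For illustration, here is a minimal SSE endpoint using only the Python stdlib. The client pays the connection latency once; after that the server pushes an event every 10 ms with no further round trips:

    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SSEHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/event-stream")
            self.send_header("Cache-Control", "no-cache")
            self.end_headers()
            for i in range(100):
                # Each event costs only send time, never another round trip.
                self.wfile.write(f"data: tick {i}\n\n".encode())
                self.wfile.flush()
                time.sleep(0.010)  # server emits every 10 ms

    HTTPServer(("localhost", 8000), SSEHandler).serve_forever()

A browser EventSource pointed at this endpoint would then receive ticks at roughly (connection latency + 10 ms * n), exactly the 500, 505, 510 pattern above.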
I'll take your word for it, as I can't find the quote in context.
But I'm unfamiliar with any polling pattern where poll requests are expected to overlap. If updates take microseconds, does that mean I can comfortably run 10,000 of these in a second?
I even think datastar looks cool, but I think that quote is misleading, and I still do.
I'd like to see some realistic results, like the kind measured by this[1] type of benchmark. "Microsecond" updates sound like microbenchmarks with very carefully crafted definitions.
It's not clear to me how these are dark. You aren't being tricked, and all of these traits might be in the game client itself as well (the game could phone home, auto-update, fail to auto-update, be slow, etc.).
So basically virtual machines, the kind we can spin up with lxd or firecracker. Not that they don't have file access; it's just that it's finicky compared to containers (I'm thinking docker/podman).
One can use something like https://github.com/google/gvisor as a container runtime for podman or docker. It's a good hybrid between VMs and containers. The container is put into a sort of VM via KVM, but it does not supply a kernel; it talks to a fake one. This means the security boundary is almost as strong as a VM's, but mostly everything will work like in a normal container.
E.g., here I can read the host filesystem even though uname says weird things about the kernel the container is running in:
$ sudo podman run -it --runtime=/usr/bin/runsc_wrap -v /:/app debian:bookworm /bin/bash
root@7862d7c432b4:/# ls /app
bin home lib32 mnt run tmp vmlinuz.old
boot initrd.img lib64 opt sbin usr
dev initrd.img.old lost+found proc srv var
etc lib media root sys vmlinuz
root@7862d7c432b4:/# uname -a
Linux 7862d7c432b4 4.4.0 #1 SMP Sun Jan 10 15:06:54 PST 2016 x86_64 GNU/Linux
gVisor lets one have a strong sandbox without resorting to WASM.
Yes, but note the difficulty of building specialized I/O or drivers for controlling access in a virtual machine versus the WASI model.
Also, startup times are generally better, with the availability of general metering (fuel/epochs), for example. The features of Wasm and a virtual machine are similar, but there are definitely unique benefits to Wasm.
The closer comparison is probably the JVM -- but with support for many more languages (the list is growing, with upstream support commonplace).
Except developers have consistently chosen not to embed the JVM, CLR, or IBMi.
wasmtime (the current reference runtime implementation) is much more embeddable than these other options were/are, and is trivially embeddable in many languages today, with good performance. On top of being an option, it is being used, and WebAssembly is spreading cross-language, farther than the alternatives ever reached.
These things may look the same, but just like ssh/scp and dropbox, they're not the same once you zoom/dig in to what's different this time.
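As a taste of that embeddability, a sketch with the wasmtime Python bindings (pip install wasmtime), loading a trivial module written inline in WAT:

    from wasmtime import Engine, Instance, Module, Store

    engine = Engine()
    store = Store(engine)
    # A tiny module exporting one function, defined in WAT text.
    module = Module(engine, """
        (module
          (func (export "add") (param i32 i32) (result i32)
            local.get 0
            local.get 1
            i32.add))
    """)
    instance = Instance(store, module, [])
    add = instance.exports(store)["add"]
    print(add(store, 2, 3))  # 5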
> As if developers are consistently choosing to embed WASM; just wait until after the hype cycle dies.
>
> What we have now is lots of hype, mostly by folks clueless of their history, in the venture to sell their cool startup idea based on WASM.
I don't think there's much of a hype cycle -- most of the air has been sucked out of the room by AI.
There aren't actually that many Wasm startups, but there are companies leveraging it to great success, and some of these cases are known. There is also the usefulness of Wasm as a target, and that is growing -- languages are choosing to build in the ability to generate wasm bytecode, just as they might support a new architecture. That's the most important part that other solutions seemingly never achieved.
The ecosystem is aiming for a least-changes-necessary approach -- integrating in a way that workflows and existing code do not have to change. This is a recipe for success.
I think it's a docker-shaped adoption curve -- most people may not think it is useful now, but it will silently and usefully be everywhere later. At some point, it will be trivial to ship a small WASM binary inside (or independent of) a container, and that will be much more desirable than building a container. The artifact will be smaller, more self-describing, work with language tooling (i.e. a world without Dockerfiles), etc.
I believe that the two most likely futures for the wasm-as-plugin-engine are mods in games and applications with a generic extension interface.
IMO, in games, developers would prefer something with a reasonable REPL like Lua or JavaScript (a game is already assumed to be heavy, so if the mods are not performance-critical, running a V8 should not be a problem). For extensions in generic complex applications (things like VSCode, Blender, Excel, etc.), I would posit that the wasm sandbox could be a really good way to enable granular-permission secure extensions.
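To gesture at what granular permissions could look like, a sketch using the wasmtime Python bindings: a hypothetical WASI-targeted plugin.wasm is instantiated so that only one preopened directory is visible to it:

    from wasmtime import Engine, Linker, Module, Store, WasiConfig

    engine = Engine()
    linker = Linker(engine)
    linker.define_wasi()  # expose the WASI imports to modules

    store = Store(engine)
    wasi = WasiConfig()
    wasi.inherit_stdout()
    wasi.preopen_dir("/tmp/plugin-data", "/data")  # the only directory it can see
    store.set_wasi(wasi)

    module = Module.from_file(engine, "plugin.wasm")  # hypothetical extension
    instance = linker.instantiate(store, module)
    instance.exports(store)["_start"](store)  # run it; filesystem access is scoped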
I don't understand WASM, but I read that a big draw of WASM is its ability to provide portability to any language. This would mean Python libraries that depend on an unpopular C library (which could be lost to time) could instead be a single WASM blob.
Assuming equivalent performance, which I understand might not be the case, is there merit to this idea? Or is there nothing new WASM provides?
It does currently present a lot of restrictions as compared to what you could do in a container. But it's good enough to run lots of real world stuff today.
> I don't understand WASM, but I read that a big draw of WASM is its ability to provide portability to any language. This would mean Python libraries that depend on an unpopular C library (which could be lost to time) could instead be a single WASM blob.
Yes, this is a key value of WebAssembly compared to other approaches: it is a relatively lightweight way (compared to a container or a full-blown VM) to package and distribute functionality from other languages, with high performance and fast startup. The artifact is minimal (like a static/dynamic library, depending on how much you've included), and if your language has a way to run WASM, you have a way to tap into that specialized computation.
I could be wrong, but I can't find anything about how to include your C dependencies with IronPython when you compile. Instead I see that IronPython has limited compatibility with the Python ecosystem because of Python libraries using C.
Contrast that with WASM, where you can write in any language and bring the ecosystem with you, since it all compiles down.
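A hypothetical flow for that: compile the old C library once to a wasm blob, then load it from Python via the wasmtime bindings (the file name and export here are made up for illustration):

    # Build step, run once wherever clang with wasm support exists:
    #   clang --target=wasm32 -nostdlib -Wl,--no-entry \
    #         -Wl,--export=fib -o legacy.wasm legacy.c
    from wasmtime import Engine, Instance, Module, Store

    engine = Engine()
    store = Store(engine)
    module = Module.from_file(engine, "legacy.wasm")  # the hypothetical blob
    instance = Instance(store, module, [])
    fib = instance.exports(store)["fib"]
    # Scalar arguments pass straight through; pointer arguments would need
    # reads/writes into the module's exported memory.
    print(fib(store, 10))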
Fully agree with your point here, but wanted to point out that including C dependencies is actually one of the biggest reasons why Python support is hard for WebAssembly too.
Bolstering your point -- smart and hardworking people are working on this.
It's a fun ecosystem -- the challenge is huge but the work being done is really fundamentally clean/high quality, and the solutions are novel and useful/powerful.
I think that looking at it in terms of embeddability is more useful than portability.
In the sense that binding C into almost any language is easily done without too many problems, what wasm allows is a secure and performant interface with that language.
For example, IIRC one of the first uses of wasm was to sandbox the various codecs that had regular security vulnerabilities. In this, Wasm is neither the first nor the only approach, but with a combination of hype and simplicity it is having good success.